Palliative care is an important and emerging issue for health care providers, educators, and the general public. As medical advances increase life expectancy, more and more people suffer from chronic and progressively disabling diseases that require treatment for depression and assistance with pain and symptom management. Some recent studies have pointed to significant problems within the health care system that preclude the achievement of the best possible quality of life for patients and their families. Areas identified for improvement include education and training for health care providers, improved pain and symptom management, and access to appropriate and quality health care services. The Assisted Suicide Funding Restriction Act of 1997 contains a provision designed to focus federal funding on research, training, and demonstration projects that would address these specific problem areas. The act authorizes funding for a number of palliative care topics (see table 1) and directs the Secretary of HHS to emphasize palliative medicine among the Department's research and funding priorities under section 781. Section 781 is within title VII of the Public Health Service Act, which authorizes numerous programs for health professions education and training. Section 781 was first funded in 1993 to conduct health professions education research in four broad topic areas related to (1) educational indebtedness, (2) the effect of programs for minority and disadvantaged individuals, (3) the extent of investigations and disciplinary actions by state licensing authorities, and (4) primary care. The Bureau of Health Professions within the Health Resources and Services Administration (HRSA) is the HHS agency responsible for administering grants funded under section 781 of title VII. The extent of palliative care instruction varies considerably across and within the three major phases of the physician education and training process. 
The first phase is undergraduate medical education—or medical school—where students typically receive 2 years of classroom, or didactic, instruction followed by 2 years of clinical training. The United States has 144 accredited medical schools. The second phase is graduate medical education—or residency training—where residents receive 3 to 8 years of clinical training in a medical specialty. The United States has over 7,700 accredited residency programs. The third phase is continuing medical education, which provides physicians who are already practicing medicine with the education and training necessary to maintain or learn new skills. Continuing medical education courses are provided primarily by medical schools and state medical societies, but such courses are also provided by medical associations and consultants. Throughout these three phases, a variety of formal accreditation and certification processes are used to test student competency and to judge the quality of instruction and training. Our review of medical schools showed that the amount of attention given to palliative care issues varied. Accrediting organizations have generally steered away from standards requiring instruction in topics as specific as pain management, preferring to leave such matters to the discretion of the faculty at each school. To determine the extent to which the schools addressed these topics, we surveyed all U.S. medical schools on seven palliative care topics. For each of the seven palliative care topics we asked about, at least half of the 125 U.S. medical schools that responded to our survey said they had some degree of required instruction. (See fig. 1.) Instruction in palliative care for chronic illness was required by the smallest share of schools (56 percent). 
For the remaining topics, the percentage of schools requiring the topic was higher; for example, over three-quarters required instruction in the topic of pain management for the terminally or chronically ill, and 94 percent required instruction in depression identification and treatment. Our survey responses showed that some schools have added these topics fairly recently. For example, 24 percent of schools reported adding pain management as a required subject within the last 3 years. (For a more detailed summary of our medical school survey results, see app. II.) Many schools reported a need to change palliative care instruction, particularly in the area of clinical training. Overall, 30 percent of schools reported a need to change their classroom curriculum in palliative care, and close to 50 percent reported wanting to provide students with more hands-on training experience in diagnosing and treating patients with pain due to chronic or terminal illness. Evaluation processes vary in the extent to which they measure students’ knowledge of palliative care issues. (See fig. 2.) The percentage of medical schools that reported testing competency in the topics we surveyed ranged from 36 percent for interdisciplinary health care for end of life to 72 percent for identifying and treating depression. Many medical schools also rely heavily on national examinations—the U.S. Medical Licensing Examination or the National Board of Osteopathic Medical Examiners’ exam—to evaluate student knowledge. A study is currently under way to examine the degree to which the U.S. Medical Licensing Examination tests student knowledge in end-of-life care issues and to develop a method to evaluate student performance on these test questions in the future. Our review indicated that attention to palliative care issues varied in residency programs as well. 
Accrediting bodies at the graduate level generally require some specific areas of instruction, although, as in medical schools, the primary responsibility for curriculum and training content is assumed by the program director and faculty. Required topics of instruction, such as domestic violence, vary by specialty, and few specialties have requirements that include specific palliative care topics. Because of the large number of accredited residency programs in the United States, we did not administer a survey similar to the one we developed for medical schools. We relied on existing surveys done by professional associations that asked residency programs to report whether the subjects of end-of-life care and suicide were included in their training programs. The American Medical Association’s (AMA) 1996 survey showed that nearly half of the nation’s 7,787 residency programs include instruction in end-of-life care and over a third teach issues related to suicide. While historical data on the subject of suicide prevention are not available, AMA’s data show that greater numbers of residency programs now offer instruction in end-of-life care than in the past. In 1996, nearly 50 percent of residency programs taught end-of-life care, compared with 38 percent in 1994. To some extent, the percentage of residency programs that taught palliative care subjects corresponded to the degree to which these skills might be needed in the specialty area covered by the program. For example, 93 percent of family practice residency programs in the subspecialty of geriatrics reported teaching end-of-life care, while only 10 percent of pathology residency programs in the subspecialty of pediatric pathology reported teaching the subject. However, the percentage of programs that reported teaching end-of-life care was surprising for some specialties for which the need for physicians skilled in end-of-life care seems more evident. 
For example, nearly half of internal medicine residency programs in the subspecialty of oncology reported not teaching end-of-life care, although physicians treating patients with cancer often care for terminally ill patients. (See app. III for a detailed summary of AMA’s 1996 residency program survey results.) The knowledge and skill of resident physicians are evaluated by each residency program’s internal evaluations and national examinations. These examinations include the U.S. Medical Licensing Examination as well as examinations some physicians take to become certified in a medical specialty. The extent to which board examinations include questions related to palliative care has not been quantified, and student performance on palliative care questions that may be included on the exams has not been evaluated. The availability of continuing medical education courses that focus on palliative care issues for terminally or chronically ill people appears limited. Many states and medical associations require physicians to continue their medical education to maintain their medical license or membership benefits, but they generally do not require courses on specific topics such as palliative care. Because of the number and variety of continuing medical education providers, information on the existence of continuing medical education courses dedicated to palliative care issues was not readily available. However, we queried the AMA’s database of over 2,000 accredited continuing medical education activities and found that few specifically addressed palliative care. In addition, an official with the American Osteopathic Association said there are few continuing medical education courses related to palliative care for doctors of osteopathy. An example of a course that specifically addresses palliative care issues is a self-study program developed by the American Academy of Hospice and Palliative Medicine, which covers a variety of palliative care topics. 
In recognition of the need for more courses in this area, private efforts are under way to develop more conferences on end-of-life care issues as well as to promote those that already exist. The fiscal year 1998 conference committee report on HHS appropriations specifies $452,000 for section 781. Officials in HRSA plan to use $150,000 of this amount for seven medical education projects, including one project on palliative care. All seven projects will be conducted by one medical education research center. HRSA plans to provide the funds for the seven projects in May 1998. Because budgets are not maintained separately for each project, HRSA and medical education research center officials were not able to specify the amount of funding dedicated to the palliative care project. The project will assess current medical school courses on death and dying to determine if they meet recommended methods for teaching end-of-life care. The remaining $302,000 will be used to support projects focused on increasing knowledge about the needs and resources of the nation’s health professions. Information obtained through these projects will be used to assess the effectiveness of current workforce programs. HRSA officials said they consider this research a higher priority. In addition, the officials said that due to the importance of health workforce research, future funding of palliative care projects in medical education is uncertain. HRSA did not include palliative care research for medical education in its fiscal year 1999 budget justification. HRSA officials do not plan to fund any of the other types of palliative care topics authorized under the Assisted Suicide Funding Restriction Act. They said these other initiatives, such as demonstration projects to reduce restrictions on access to hospice programs, are not related to title VII’s traditional focus on supporting health professions education and training. 
Projects of these types are generally administered by HHS agencies other than HRSA. For example, the act authorizes research funding under section 781 for advancing the biomedical knowledge of pain management, which has been primarily the domain of the National Institutes of Health (NIH). The act also authorizes research under section 781 for using specific outcome measures to assess the quality of care for patients with disabilities or terminal or chronic illness; measuring outcomes and quality of care is an area of expertise for HHS’s Agency for Health Care Policy and Research (AHCPR). Several HHS agencies fund projects related to palliative care under their own program authority. Some of these projects directly address the types of research, training, and demonstration projects authorized in the Assisted Suicide Funding Restriction Act, including the following: Research authorized by the act includes projects to advance biomedical knowledge of pain management and assess the quality of care for patients with terminal illness by measuring and reporting specific outcomes. NIH—the federal government’s primary focal point for biomedical research—estimates that in fiscal year 1997, it spent over $82 million on various types of pain management research. NIH also established a pain research consortium to enhance and coordinate pain research across the various components of NIH. NIH’s National Institute of Mental Health has also begun suicide prevention research projects. HHS’s Assistant Secretary for Planning and Evaluation is providing $174,000 to evaluate the quality of hospice care in nursing homes—a topic directly related to this provision. Training authorized by the act includes projects to teach physicians about palliative care issues. HRSA’s HIV/AIDS Bureau is in the process of completing an evaluation of a Canadian instruction module on palliative care and plans to make recommendations on how the module should be modified for use in the United States. 
AHCPR, which funds projects to improve the effectiveness of health care services, issued guidance in 1994 on management of cancer pain that included discussions and recommendations on palliative therapies used to relieve or ease pain. Demonstrations authorized by the act include projects to fund home health care services, community living arrangements, and attendant care services. The Health Care Financing Administration, which is responsible for administering Medicare and Medicaid, has supported these types of demonstration projects. For example, states can obtain waivers to use Medicaid funds for home health care services, community living arrangements, and attendant care services, which are not normally covered by Medicaid but that are considered necessary to care for and improve the quality of life for medically fragile populations. Other federal projects do not have an explicit objective related to palliative care and suicide prevention but could provide benefits in this area. For example, AHCPR has many research initiatives that could address improving palliative care for patient populations most prone to suicide. AHCPR and the American Association of Health Plans will provide $7 million over 3 years to assess the quality of care for patients with chronic diseases under varying features of managed care organizations. In addition, AHCPR has initiatives to develop and improve quality of care measures for health care providers and health service delivery, which could include outcomes for palliative care in the future. AHCPR’s Medical Treatment Effectiveness Program—which has traditionally focused on identifying and promoting the most effective treatments to prevent, diagnose, or treat diseases such as cancer, AIDS, or cardiovascular disease—could also incorporate palliative care for these and other terminal or chronic illnesses in future research projects. 
Private foundations, nonprofit organizations, and professional associations have recognized palliative care as an emerging and important area of medicine and research. As a result, a variety of private initiatives are under way that cover many of the areas of research, training, and demonstration projects described in the act. The two most comprehensive initiatives we identified are Last Acts, funded by the Robert Wood Johnson Foundation, and Project on Death in America, sponsored by the Open Society, a foundation created by philanthropist George Soros. Last Acts aims to raise awareness of the need to improve the care of persons who are dying, improve communication and decisionmaking related to end-of-life care, and change the way health care and health care institutions approach care for dying people. Last Acts has task forces and committees to pursue a variety of issues, including improving provider education on palliative care and developing outcomes and evaluation tools for palliative care. Project on Death in America is a $30 million campaign to transform the culture of dying by supporting projects and fostering change in the provision of end-of-life care, public and professional education, and public policy. It conducts its own projects and provides grants to other individuals and institutions. Its major project is a $7 million faculty scholars program for innovative clinical care, research, and educational programs to improve the care of the dying. Private entities also provide funding for a variety of other projects in palliative care—some with a specific focus on physician education or on improving access to and the quality of palliative care. (See table 2.) We provided a draft of this report to the Secretary of HHS for review and comment. Although we did not receive comments in time for publication, HRSA and NIH officials informed us that they generally concurred with the report’s findings. 
Additionally, NIH officials stated that a conscious effort is needed to change the curricula of health professions education schools to sensitize providers to the needs of chronically ill and disabled patients. In particular, they emphasized that attention needs to be given to pain management, depression, and symptom management. In addition, officials from HRSA, NIH, AHCPR, and the Office of Public Health and Science provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of HHS, interested congressional committees, and other interested parties. We will also make copies available to others on request. The information contained in this report was developed by Frank Pasquier, Assistant Director; Timothy S. Bushfield; and Lacinda Baumgartner. Please contact me at (202) 512-6543 or Frank Pasquier at (206) 287-4861 if you or your staff have any questions. We discussed the extent to which palliative care issues were taught and tested in medical schools, residency programs, and continuing medical education with representatives from cognizant professional associations, including the AMA, the American Osteopathic Association, the AAMC, and the American Association of Colleges of Osteopathic Medicine; faculty from various educational institutions; representatives from entities administering national examinations necessary for medical licensure and board certification, such as the National Board of Medical Examiners and the American Board of Internal Medicine; representatives from accrediting bodies for medical schools, residency programs, and continuing education, including the Liaison Committee for Medical Education, the Accreditation Council for Graduate Medical Education, and the Accreditation Council for Continuing Medical Education; and recognized experts in the field of palliative medicine. 
To gather more specific information about the extent to which the palliative care subjects addressed in the Assisted Suicide Funding Restriction Act were taught in medical schools, we developed and administered a survey to all accredited U.S. allopathic and osteopathic medical schools regarding their curriculum, training, and testing of student knowledge in pain management, depression identification and treatment, and palliative care. After reviewing literature on the subject and consulting with experts, we selected seven topics to capture the range of possible instruction. Our topics included both the broad topic of palliative care and more specific topics, such as pain management. While the specific topics are components of palliative care, they do not individually encompass the broader concept of palliative care. For this reason, we asked the schools to report on each topic separately. In conducting our survey of medical schools, we used mailing lists provided by AAMC and the American Association of Colleges of Osteopathic Medicine that they use to conduct annual medical school curriculum surveys. Our response rate was 85 percent. Results are self-reported, and we did not verify or standardize responses among schools. A summary of the survey results is shown in appendix II. Due to the large number of residency programs and our reporting time frames to the Congress, we did not conduct a similar survey of these programs. However, the AMA provided us with related information reported in its annual survey of 7,787 residency programs accredited by the Accreditation Council for Graduate Medical Education and combined specialty residency programs. The survey covers allopathic programs only. Residency programs responding to this survey in 1996 reported whether the general subjects of end-of-life care and suicide were included in their curricula. 
More detailed data on subjects specifically related to pain management, depression identification and treatment, and palliative care were not available. AMA survey data did not include information on whether residency programs tested student competency in particular subject areas. We discussed HHS’s plans for awarding palliative care grants under section 781 with representatives responsible for administering these grants in HRSA’s Bureau of Health Professions. We also reviewed HRSA’s plans for funding section 781 projects in HRSA’s 1998 and 1999 Justification of Estimates for Appropriations Committees. We discussed other federal and private palliative care research and education initiatives funded outside section 781 with HHS agencies and private entities involved in similar palliative care activities. HHS agencies or offices we spoke with included AHCPR, NIH, the Health Care Financing Administration, and the Office of the Assistant Secretary for Planning and Evaluation. Private entities we obtained information from regarding ongoing palliative care projects included foundations, such as the Robert Wood Johnson Foundation; nonprofit organizations, such as the Open Society, the United Hospital Fund of New York, and The George Washington University’s Center to Improve Care of the Dying; and professional associations, including AMA’s Ethics Institute, the American Academy of Hospice and Palliative Medicine, and the American Board of Hospice and Palliative Medicine. The federal and private palliative care projects we identified are examples of the various types of projects being conducted; they are not intended to be a comprehensive listing of palliative care projects. We conducted a survey of all medical schools—both allopathic and osteopathic—in the United States. We asked each school about the extent to which their didactic—or classroom—instruction and clinical training addressed palliative care topics. 
We received responses from 125—or 85 percent—of these schools. Tables II.1 through II.3 summarize the results of this survey. The AMA surveyed 7,787 residency programs in the United States in 1996. We obtained data on the number of programs that included end-of-life care and suicide prevention topics. [Table III.1, U.S. Residency Programs Teaching End-of-Life Care and Suicide Prevention, presents these data by specialty, with subspecialties indented; the full table is not reproduced here.]
Pursuant to a legislative requirement, GAO reported on the extent to which projects under section 781 of the Public Health Service Act have furthered the knowledge and practice of palliative care, particularly with regard to curricula offered and used in medical schools. GAO's preliminary work showed that no fiscal year (FY) 1998 funding for section 781 projects would be awarded by its April 30, 1998 reporting date, so GAO focused on determining: (1) the extent to which the physician education and training process currently teaches and tests student competency in palliative care issues; (2) the Department of Health and Human Services' (HHS) plans for funding palliative care projects under section 781; and (3) other federal and private palliative care research and education initiatives. GAO noted that: (1) physicians receive varying amounts of instruction in palliative care topics as they progress through 4 years of medical school and 3 to 8 years of subsequent specialized training in a residency program; (2) each of the seven palliative care areas in GAO's survey was required by 56 percent or more of the 125 medical schools responding to its survey; (3) similarly, about half of the 7,787 specialty and subspecialty residency programs educated students in end-of-life care; (4) GAO's survey showed that many medical schools are interested in providing additional instruction and training in palliative care; (5) about one-third of the schools reported a need to change their curriculum for addressing palliative care for the chronically and terminally ill; (6) close to half reported a need to include more clinical training in managing pain and depression for these patient populations; (7) HHS officials plan to use $150,000 of the $452,000 specified for section 781 in the FY 1998 appropriations conference report to support seven medical education research projects, including one palliative care project; (8) officials from HHS and the medical education research center 
receiving these funds were not able to specify the amount being spent on the palliative care project because separate budgets are not developed for each project; (9) of the remaining section 781 funds, all $302,000 will be used to support research for improving the distribution and diversity of the health care workforce; (10) because of the higher priority that HHS has assigned to this other research, officials do not plan to use any funds for palliative care research, training, or demonstration projects in 1999; (11) nevertheless, a substantial amount of research related to palliative care is being funded in ways other than through section 781; (12) over the last few years, HHS and private entities have invested tens of millions of dollars into projects similar to those specified in the Assisted Suicide Funding Restriction Act; (13) some HHS agencies have more general projects, not specified in the act, that could also benefit palliative care in the areas of increasing health care access, improving quality of care, and advancing biomedical research; and (14) private foundations and other private organizations have spent millions of dollars to educate and train health care professionals in palliative care and improve the quality of care for the terminally and chronically ill.
The U.S. government is one of the world’s largest property owners, with a real estate portfolio of over 400,000 defense and civilian buildings and over one-half billion acres of land. As we and others have previously reported, federal asset managers are confronted with numerous challenges in managing this multibillion-dollar real estate portfolio, including a large deferred maintenance backlog and obsolete and underutilized properties. These challenges must be addressed in an environment marked by budgetary constraints and growing demands to improve service. In response to this backlog and limited funding for repair and alteration requirements, we have suggested that the Congress consider providing the Administrator of GSA with the authority to experiment with funding alternatives, including public-private partnerships, when they reflect the best economic value available for the federal government. The Congress has already enacted legislation that provides certain agencies with a statutory basis to enter into partnerships. This additional property management tool has been provided to the Department of Veterans Affairs and the Department of Defense. In an effort to provide more agencies with a broader range of property management tools, two bills addressing federal property management were introduced in the 106th Congress. The Federal Property Asset Management Reform Act of 2000, S. 2805, would have amended the Federal Property and Administrative Services Act of 1949 to enhance governmentwide property management. Among other provisions, the act would have allowed federal agencies to out-lease underutilized portions of federal real property for 20 to 35 years and retain the proceeds from the transfer or disposition of real property. The Federal Asset Management Improvement Act of 1999, H.R. 
3285, provided for the use of (1) partnerships with the private sector to improve and redevelop federal real property, (2) performance measures for federal property management, and (3) proceeds from these partnerships being retained for the improvement of federal real property. Neither of these bills was passed, but their provisions reflect the kinds of actions that could be taken to address the issues surrounding the management of federal real property. The hypothetical public-private partnerships our contractors developed and analyzed for 10 specific GSA properties indicated that partnerships could be a viable management tool. However, more detailed feasibility studies would need to be done before partnerships are undertaken. In addition, we did not compare the benefits of public-private partnerships with other alternatives for addressing problems in federal buildings, such as appropriations for renovations. Such an analysis of all alternatives would need to be performed so that the alternative offering the best economic value for the government could be chosen. OMB staff indicated that where there is a long-term need for the property by the federal government, it is doubtful that a public-private partnership would be more economical than directly appropriating funds for renovation. Public-private partnerships can take on many different forms. The potential benefits of any partnership would be largely defined as the partnership is being formed. The various aspects of the partnership arrangement would be negotiated and agreed upon, such as the terms of the master ground lease, which is the mechanism the federal government would use to lease its property to the partnership, and the redevelopment strategy. Both the private sector and government would share in the distribution of cash flows generated by the property. 
The hypothetical partnership scenarios developed by our contractors for this study entailed some basic assumptions about the structure of the partnerships but did not detail the specifics of each partnership. For example, the hypothetical partnership scenarios did not guarantee government occupancy of the properties. However, depending on how OMB scores these transactions, some of the scenarios could trigger capital lease-scoring requirements due to the implicit long-term federal need for the space. These issues will need to be further explored before public-private partnerships are created. The redevelopment strategies developed for each property ranged from repairing and modernizing the existing building to demolishing the existing building and increasing the amount of office space by rebuilding multiple buildings on the same site. According to our consultants, the analysis of the partnerships for many of these properties showed a sufficient potential financial return to attract private sector interest in a partnership arrangement. Multiple potential benefits to the federal government of public-private partnerships were also identified. These potential benefits include the utilization of the untapped value of real property, conversion of buildings that are currently a net cost to GSA into net revenue generators, attainment of efficient and repaired federal space, reduction of costs incurred in functionally inefficient buildings, protection of public interests in historic properties, and creation of financial returns for the government. When deciding whether to enter into a partnership, the government will need to weigh the expected financial return and other potential benefits against the expected costs, including potential tax consequences, associated with the partnership. Any cost associated with vacating buildings for the renovation work to be done would also have to be considered in any alternative that is evaluated. 
For a public-private partnership to be a viable option, there must be interest from the private sector in partnering with the government on a selected property. The potential private sector partner’s return from the partnership is a critical factor in its decision on whether to partner with the federal government. According to our contractors, about a 15-percent IRR would likely elicit strong interest from the private sector in a partnership. However, this is only one factor, and the circumstances and conditions of each partnership are unique and would have to be evaluated on a case-by-case basis by both the private sector and the federal government. For example, a somewhat lower IRR could be attractive if other conditions, such as the risk level, are favorable. In addition, when our contractors discussed possible partnership scenarios with local developers, the developers said that to participate, they would want at least a 50-year master ground lease. The slides in appendix I, containing detailed information on the properties, show that the longer lease period would allow for the private sector to maximize its financial return from the partnership. Our contractors determined that 8 of the 10 GSA properties in our study were strong to moderate candidates for public-private partnerships. This determination was based on the (1) estimated IRR for the private sector partner in year 10 of the project, which ranged from 13.7 to 17.7 percent; (2) level of federal demand for the space; and (3) level of nonfederal demand for space. The level of demand for space, both federal and nonfederal, affects the level of risk that the space will be vacant and thus non-income-producing. The stronger the local market is for rental space, the more likely the space will be rented and thus be income-producing for the partnership. 
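The IRR figures discussed above can be made concrete with a short computation. The sketch below finds, by bisection, the discount rate at which a cash flow stream's net present value is zero; the cash flows themselves are hypothetical and are not drawn from any of the 10 properties in the study.

```python
# Hypothetical sketch: computing the internal rate of return (IRR) for a
# partnership cash flow stream. All dollar figures are illustrative.

def npv(rate, cash_flows):
    """Net present value of a cash flow stream; cash_flows[0] is year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-7):
    """Find the discount rate where NPV crosses zero, via bisection.

    Assumes a conventional stream (initial outlay, later receipts), for
    which NPV declines monotonically as the rate rises on [lo, hi].
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive: the true IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Year 0: private partner's capital contribution; years 1-9: annual net
# receipts; year 10: receipts plus an assumed sale or refinancing proceed.
flows = [-10_000_000] + [1_500_000] * 9 + [12_000_000]
print(f"IRR: {irr(flows):.1%}")  # a mid-teens return, in the range the
                                 # contractors said would attract interest
```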
The properties that were strong candidates for partnerships were located in areas with a strong federal and nonfederal demand for space; and many had untapped value that the partnership could utilize, such as excess land on which a new or expanded building could be built. Public-private partnerships were not viable for 2 of the 10 GSA properties in our study. This was primarily due to a weak nonfederal demand for space and low financial potential. These properties had estimated potential IRRs of 12.4 and 10.3 percent. In addition to the relatively low IRRs, neither property had the potential of increasing the amount of rentable space available to increase the earning potential of the property, and both were in markets that had vacant office space with little or no demand for new office space. Many factors can affect the viability of a partnership arrangement. In addition to the local federal and nonfederal demand for space, the actual cost of redevelopment of a property to meet federal needs can greatly affect the viability of a partnership arrangement. The higher the cost of renovation, the longer it will take the partnership to recoup its costs and make a profit, thus affecting the appeal of the partnership to the private sector. In GSA’s inventory, numerous buildings either have or are at risk of having a negative net cash flow due to their deteriorating condition. Four of the 10 buildings in our study are either vacant or were expected to be vacant by 2002, with little prospect of recruiting other agencies to fill the space because of the condition of the buildings. In addition, two of the other six buildings we studied were at risk of losing their current tenants because of the condition of the buildings. 
If public-private partnership authority becomes available, decisionmakers and policymakers will need to consider such issues as budget scorekeeping rules, the type of facilities that would be appropriate for a partnership arrangement, and congressional review and oversight. In addition, each property is unique and will thus have unique issues that will need to be negotiated and addressed as the partnership is formed. Great care will need to be taken in structuring partnerships to protect the interests of both the federal government and the private sector. Our study developed a conceptual framework for public-private partnerships in order to identify the potential benefits of these partnerships. Our study did not identify or address all the issues of partnerships that will need to be considered by decisionmakers and policymakers as partnerships are developed. Action is needed to fix buildings that are in disrepair and have a negative net cash flow due to their deteriorating condition. As a result of the analysis done by our contractors, it appears that allowing GSA and other property-holding agencies to enter into public-private partnerships may enable them to deal with some of their deteriorating buildings. Partnerships could also provide other financial benefits to the federal government, such as reduced operating expenses and increased income that could be used for renovating other federal buildings. The potential benefits of public-private partnerships do not diminish the need for GSA to pursue and consider other alternatives for addressing problems in deteriorating federal buildings, such as federal financing through appropriations or the sale or exchange of property. Regardless of whether public-private partnership authority is provided, the problems with these buildings need to be addressed. 
We recommend that the Administrator of GSA use all available strategies to address the problems of buildings in GSA’s inventory that have or are at risk of having a negative cash flow as a result of their deteriorating condition. We also recommend that the Administrator of GSA seek statutory authority to establish a pilot program that would demonstrate the actual benefits that may be achieved from public-private partnerships that achieve the best economic value for the government. The Congress should consider providing the Administrator of GSA with the authority to proceed with a pilot program to demonstrate the actual benefits that may be achieved using public-private partnerships that achieve the best economic value for the government as a real property management tool. If such authority is granted, the Congress should consider allowing GSA to enter into master ground leases of sufficient length to attract private sector interest in participating in partnerships with the federal government. Our study found that a 50-year master ground lease was generally sufficient to attract private sector interest. As we stated in April 2001, Congress should also consider allowing agencies to retain the funds from real property transactions. If such authority is granted, Congress should continue its appropriation control and oversight over the use of any funds retained by agencies. On June 28, 2001, we received written comments on this report from GSA’s Commissioner for the Public Buildings Service. He agreed with the findings and recommendations in our report and noted a range of property management tools that GSA is currently using to address the physical conditions of its real property inventory. These comments are reprinted in appendix II. GSA officials also provided technical comments, which have been incorporated as appropriate. 
As suggested in your request letter and discussed with your offices, we hired contractors to develop and analyze hypothetical partnership scenarios for 10 selected GSA buildings to identify the potential benefits to the federal government and private sector of allowing federal agencies to enter into public-private partnerships. GSA’s National Capital Region had previously contracted for a study to analyze the financial viability of public-private partnership ventures for three buildings in Washington, D.C. As agreed with your offices, because the majority of the work for these properties had already been done, we had the contractor update its work on these 3 buildings and selected them as 3 of the 10 GSA properties. To help us select the other 7 properties for our study, GSA provided a list of 36 properties that it considered good candidates for public-private partnerships. In preparing this list of properties, GSA officials said that they considered factors such as the strength of the real estate market in each area, the extent to which the property was currently utilized or had land that could be utilized, and the likelihood of receiving appropriations to rehabilitate the property in the near future. We judgmentally selected seven properties from this list to include properties (1) from different geographic areas of the country, (2) of different types and sizes, and (3) with historic and nonhistoric features. To analyze the potential viability of public-private partnerships for each of the 10 selected GSA properties, the contractors did the following: analyzed the local real estate markets, created a hypothetical partnership scenario and redevelopment plan, and constructed a cash flow model. In the contractors’ judgment, the partnership scenarios were structured to meet current budget-scoring rules and provisions in H.R. 3285. 
These provisions included the requirements that (1) the property must be available for lease, in whole or in part, by federal agencies; (2) agreements do not guarantee occupancy by the federal government; (3) the government will not be liable for any actions, debts, or liabilities of any person under an agreement; and (4) leasehold interests of the federal government are senior to those of any lender of the nongovernmental partner. However, a determination on how the partnerships would be treated for budget-scoring purposes would have to be made after more details are available on the partnerships. We accompanied the contractor on the visits to the seven GSA properties that had not been previously studied. We interviewed or participated in discussions with developers and local officials in the areas where the properties were located and officials from GSA. We reviewed the contractors’ work on the 10 properties for reasonableness but did not verify the data used by the contractors. The partnership viability scenarios developed for this assignment are hypothetical, based on information that was made readily available by representatives of the local real estate markets, city governments, and GSA. Any actual partnerships involving these properties may be very different from these scenarios. In-depth feasibility studies must be done to evaluate partnership opportunities before they are pursued. There may be other benefits and costs that would need to be considered, such as the possible federal tax consequences and the costs of vacating property during renovation in some cases. This study only looked at the potential benefits to the federal government and private sector of public-private partnerships as a management tool to address problems in deteriorating federal buildings. We did not evaluate the potential benefits of other management tools that may be available for this purpose. We did, however, discuss the implications of using public-private partnerships with OMB representatives. 
We did our work between November 2000 and June 2001 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the Chairmen and Ranking Minority Members of Committees with jurisdiction over GSA, the Director of OMB, and the Administrator of GSA. We will make copies available to others upon request. Major contributors to this report include Ron King, Maria Edelstein, and Lisa Wright-Solomon. If you or your staff have any questions, please contact me or Ron King at (202) 512-8387 or at ungarb@gao.gov or kingr@gao.gov. The partnership viability scenarios developed for this assignment are hypothetical, based on information readily available from people in the local real estate markets, city officials, and GSA. Any actual partnerships involving these properties may be very different from these scenarios. In-depth feasibility studies must be done to evaluate the partnership opportunities before they are undertaken. 
[Appendix I slide content: for each property, the slides presented the redevelopment strategy, cash flow, preferred return (to the private partner), and master ground lease term comparisons under a 50-year master lease. Strategies included demolition of the existing building and construction of a like building (Andover, Charleston); repair/modernization of the existing building and construction of a new building on excess land (Portland); construction of a new building on underutilized land and outlease of the existing buildings and property (Seattle); and repair/modernization of the existing building and construction of new space (GSA headquarters, FOB 9). Jacksonville and Columbia were also among the properties presented. For FOB 9, FDA is scheduled to vacate the building in 2001 and return it to GSA, free of FDA-generated hazardous materials, in 2002; the building occupies a very desirable location, with proximity to the Capitol, the Smithsonian, and the Mall; and the plan is to completely renovate the building to greatly update and functionally improve the space, recapture existing laboratory space as office space, and add 150 parking spaces to the existing 50 in the basement level.]
Glossary of terms used in this report:
Building Owners and Managers Association International: A trade association of the office building industry that developed a standard method of floor measurement, in square feet, for commercial real property.
Cash flow: Net operating income minus master ground lease payment, debt service, and replacement reserve.
Central business district: A designated downtown section of a city, generally consisting of retail, office, hotel, entertainment, and government land uses with some high-density housing.
Debt service: Amount required for payments of interest and principal (often insurance and tax escrows, too) on money owed.
Discount rate: Percentage rate used in discounting cash flows in calculations of net present value.
Budget scorekeeping: The process of estimating the budgetary effects of pending and enacted legislation and comparing them to limits set in the budget resolution or legislation. Scorekeeping tracks data such as budget authority, receipts, outlays, and the surplus or deficit.
Gross square feet: Total enclosed floor area of a building measured in square feet.
Ground lease: A lease for the use and occupancy of land only for a period of time.
Interest rate: The rate of return charged by a lender for the use of funds, expressed in the form of a percentage per year.
Internal rate of return: The present value interest rate received for an investment consisting of payments and income that occur at regular periods; measures the return, expressed as an interest rate, that an investor would earn on an investment.
Lease: A written agreement between the property owner (lessor) and a tenant (lessee) that stipulates the conditions under which the tenant is entitled to use the property (in this case, real property) in return for periodic payments (rent) for a specified period of time.
Master lease: A controlling lease under which all other interests in the real property are subordinate; for example, if a master lease is for a 5-year term, a sublease cannot legally exceed 5 years.
Net cash flow: Cash flow minus preferred return to the private partner.
Net operating income: Operating income minus operating expenses.
Net present value analysis: Method of converting a cash flow stream over a number of years into the value of that money today, using an appropriate discount rate, in order to make investment decisions.
Operating expenses: Broad term used to describe the expenses incurred in ordinary recurring activities of a property as opposed to nonrecurring items.
Operating income: Earnings from normal operations that do not take into account proceeds from nonrecurring items.
Preferred return: A distribution of income to the private partner prior to the distribution of net cash flow in accordance with the terms of the partnership, generally to compensate the private partner for its cost of capital and risk incurred.
Present value: Value today (or at some specific date) of an amount to be paid or received later.
Public-private partnership: An arrangement by which the federal government contributes real property and a private entity contributes financial capital and borrowing ability to redevelop or renovate real property to serve, in part or in whole, a public need.
Rentable square feet: A term used in the commercial real estate market that includes occupiable square feet plus the tenants’ proportional share of common building areas, such as rest rooms, exit stairways/fire corridors, and lobbies.
Replacement reserve: Amount set aside from net operating income to pay for renovation or replacement of short-lived assets.
Square foot: Unit of area measurement equal to a square measuring one foot on each side.
Sublease: An arrangement whereby a lessee leases the property to a different end user while the lessor maintains ownership and the lessee retains all of its obligations under the lease; terms cannot exceed that of a master lease.
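The net present value and present value definitions above amount to a simple computation, sketched below with hypothetical cash flows and a hypothetical 10-percent discount rate.

```python
# Hypothetical sketch of net present value analysis as defined above:
# converting a multiyear cash flow stream into today's dollars using a
# discount rate. All figures are illustrative.

def present_value(amount, rate, years):
    """Value today of an amount to be received `years` from now."""
    return amount / (1 + rate) ** years

def net_present_value(rate, cash_flows):
    """Discount each year's cash flow and sum; cash_flows[0] is year 0."""
    return sum(present_value(cf, rate, year)
               for year, cf in enumerate(cash_flows))

# A year-0 outlay followed by five years of net receipts, discounted at 10%.
flows = [-1_000_000, 300_000, 300_000, 300_000, 300_000, 300_000]
print(round(net_present_value(0.10, flows)))  # prints 137236
```

A positive net present value means the discounted receipts exceed the initial outlay at the chosen discount rate, which is the investment-decision test the glossary entry describes.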
The U.S. government is one of the world's largest property owners, with a real estate portfolio of more than 400,000 defense and civilian buildings and more than one-half billion acres of land. Each year, the federal government spends billions of dollars to maintain its buildings. Even so, the General Services Administration (GSA) contends that it needs $4 billion, over and above these expenditures, to maintain its existing inventory. This report identifies the potential benefits to the federal government of entering into public-private partnerships on real property--an arrangement in which the federal government contributes real property and a private entity contributes financial capital and borrowing ability to redevelop or renovate the real property. GAO found that public-private partnership authority could be an important management tool to address problems in deteriorating federal buildings, but further study of how the tool would actually work and its benefits compared to other options is needed. Potential net benefits to the federal government of entering into these public-private partnerships include better space, lower operating costs, and increased revenue without up-front federal capital expenditures if further analysis shows that they would not be treated as capital leases for budget-scoring purposes. The potential benefits of public-private partnerships do not diminish the need for GSA to pursue other alternatives for addressing problems in deteriorating federal buildings. GAO summarized this report in testimony before Congress; see Public-Private Partnerships: Factors to Consider When Deliberating Governmental Use as a Real Property Management Tool, by Bernard L. Ungar, Director for Physical Infrastructure Issues, before the Subcommittee on Technology and Procurement Policy, House Committee on Government Reform. GAO-02-46T, October 1 (11 pages).
EPA administers and oversees grants primarily through the Office of Grants and Debarment, 10 program offices in headquarters, and program offices and grants management offices in EPA’s 10 regional offices. Figure 1 shows the key EPA offices involved in grants activities for headquarters and regions. The management of EPA’s grants program is a cooperative effort involving the Office of Administration and Resources Management’s Office of Grants and Debarment, program offices in headquarters, and grants management offices in the regions. The Office of Grants and Debarment develops grant policy and guidance. It also carries out certain types of administrative and financial functions for the grants approved by headquarters program offices, such as awarding grants and overseeing the financial management of grants. On the programmatic side, headquarters program offices establish and implement national policies for their grants programs and set funding priorities. They are also responsible for the technical and programmatic oversight of their grants. In the regions, grants management offices carry out certain administrative and financial functions for the grants, such as awarding grants approved by the regional program offices, while the regional program staff provide technical and programmatic oversight of their grantees. As of June 2004, 134 grants specialists in the Office of Grants and Debarment and the regional grants management offices were largely responsible for administrative and financial grant functions. Furthermore, 2,089 project officers were actively managing grants in headquarters and regional program offices. These project officers are responsible for the technical and programmatic management of grants. Unlike grant specialists, however, project officers generally have other responsibilities, such as using the scientific and technical expertise for which they were hired. 
In fiscal year 2003, EPA took 6,753 grant actions involving funding totaling about $4.2 billion. These awards were made to six main categories of recipients, as shown in figure 2. EPA offers two types of grants—nondiscretionary and discretionary: Nondiscretionary grants support water infrastructure projects, such as the drinking water and clean water state revolving fund programs, and continuing environmental programs, such as the Clean Air Program for monitoring and enforcing Clean Air Act regulations. For these grants, Congress directs awards to one or more classes of prospective recipients who meet specific eligibility criteria; the grants are often awarded on the basis of formulas prescribed by law or agency regulation. In fiscal year 2003, EPA awarded about $3.6 billion in nondiscretionary grants. EPA has awarded these grants primarily to states or other governmental entities. Discretionary grants fund a variety of activities, such as environmental research and training. EPA has the discretion to independently determine the recipients and funding levels for these grants. In fiscal year 2003, EPA awarded $656 million in discretionary grants. EPA has awarded these grants primarily to state and local governments, nonprofit organizations, universities, and Native American tribes. To highlight persistent problems and, it is hoped, to focus greater attention on their resolution, we designated EPA’s grants management, including achieving environmental results, as a major management challenge in our January 2003 performance and accountability report. In August 2003, we further addressed the question of environmental results. 
We reported that EPA (1) had awarded some grants before considering how the results of the grantees’ work would contribute to achieving environmental results; (2) had not developed environmental measures and outcomes for its grants programs; and (3) often did not require grantees to submit workplans that explain how a project will achieve measurable environmental results. We also found that EPA’s monitoring efforts had not called for project officers to ask grantees about their progress in using measures to achieve environmental outcomes. For its grants programs, EPA is still not effectively linking grants to environmental results. The problems we identified in our 2003 report continue. Further, in our 2004 report, we identified an additional problem: we could not determine from EPA’s databases the types of goods and services provided by grants. To identify goods and services obtained from discretionary grants, we surveyed discretionary grant recipients. On the basis of our survey responses, we identified a total of eight categories (see table 1). We estimated that of all the goods and services indicated by grant recipients, 59 percent were in three of these categories: (1) research and development; (2) training, workshops, and education; and (3) journals, publications, and reports. While we were able to identify goods and services from survey responses, we could not link them to results. We reviewed the files of 67 grantees to determine whether there was any link between goods and services and program measures or outcomes in grant workplans. We found that none of the 67 grants identified measures and only 9 of the 67 grants identified anticipated outcomes in their workplans. EPA has also found that grantee workplans often do not identify environmental outcomes. 
In 2003, EPA began conducting internal reviews that, for the first time, quantified the extent to which its grant-issuing offices, including program and regional offices, ensured that environmental outcomes are identified in grant workplans. EPA reported that, overall, less than one-third of the 93 grant workplans reviewed identified environmental outcomes. (See table 2.) Among EPA’s offices, the percentage of workplans that identified environmental outcomes ranged from 0 to 50 percent. In 2004, EPA plans to review seven other offices. As of July 2004, EPA had completed reviews of three offices. Among these three offices, EPA found environmental outcomes in a little less than half of grant workplans. Final agencywide data will not be available until the end of 2004, when EPA completes its internal reviews. Not surprisingly, given the lack of outcomes in the workplans, OMB found that EPA grant programs are not demonstrating results. In February 2004, OMB found that 8 of the 10 EPA grant programs it reviewed were “not demonstrating results.” These programs total about $2.8 billion. (See table 3.) OMB rated the two remaining grant programs, Brownfields and Tribal Assistance Programs, totaling $224 million, as “adequate” in demonstrating results. According to EPA’s Inspector General, EPA’s failure to consistently identify environmental measures and outcomes can weaken grant oversight. For example, the Inspector General recently reported that EPA Region 6 could not determine whether its oversight of water, hazardous waste, and air programs in Louisiana was effective because, in part, Region 6 had not linked these programs to environmental outcomes. Region 6 had focused only on program outputs; it therefore could not determine whether it was using its resources wisely and achieving program results. EPA’s program and regional grants officials have identified difficulties in measuring and achieving environmental outcomes. 
For example: In response to EPA’s internal reviews, Region 9 officials noted that it is costly and difficult to measure outcomes when there is a substantial time lag between implementing the grant and achieving environmental outcomes. Moreover, it is difficult to attribute environmental outcomes to one specific grant when dealing with complex ecosystems. In addition, Office of Environmental Information project officers stated that environmental outcome requirements should not apply to support functions like information management. Responding to the recent Inspector General report faulting Region 6 for its oversight of Louisiana’s environmental programs, Region 6 officials indicated that they had been unfairly criticized for not implementing environmental measures since the agency, as a whole, had been unable to do so. These concerns demonstrate the need for guidance that addresses the complexities of measuring and achieving environmental results. Furthermore, not every EPA program office has yet developed environmental measures for its grant programs. For example, in June 2004, the Inspector General found that EPA has been working on developing environmental measures for the Clean Water State Revolving Fund program since 1998. However, EPA has not yet developed these measures or a comprehensive plan for developing them, although it intends to complete them by February 2005. In 2003, we reported that EPA’s new 5-year grants management plan was promising. In the plan, EPA had established the goal of “identifying and achieving environmental outcomes” with the objectives and associated milestones shown in table 4. As table 4 shows, EPA’s progress in implementing the plan’s environmental outcomes objectives is behind schedule. EPA plans to issue its environmental outcomes policy, a key objective originally scheduled for 2003, in fall 2004, but the policy will not become effective until January 2005. 
EPA officials stated that the policy was delayed because of the difficulty in addressing environmental outcomes. Furthermore, as a result of this delay, EPA has delayed meeting the objectives of developing a tutorial for grantees, requiring outcomes in solicitations, and incorporating success in achieving outcomes into the criteria for awarding grants, objectives that are contingent on the issuance of the policy. EPA is also delaying the objective of incorporating grantees’ previous success in identifying outcomes into the criteria for awarding new grants in order to give grantees a year to understand the new policy. In the absence of a final outcomes policy, EPA issued an interim policy in January 2004. The interim policy is a positive step in that, for the first time, EPA is requiring project officers to identify, at the pre-award stage, how proposed grants contribute to achieving the agency’s strategic goals under the Government Performance and Results Act of 1993 (GPRA). (See fig. 3, example 1.) As we previously reported, project officers had been linking the grant to the agency’s goal after the award decision, so that the linkage was a recordkeeping activity rather than a strategic decision. While the interim policy is a positive first step, it does not require project officers to link grant funding to environmental outcomes. Instead, it “encourages” project officers to link grant funding to outputs, outcomes, and performance goals, as illustrated in figure 3, example 2. EPA officials explained that the interim policy did not require the full strategic plan/GPRA “architecture” (goals, objectives, subobjectives, program/project, outputs, outcomes, and annual performance goals) because not all EPA staff are trained on how to implement the strategic plan/GPRA architecture. However, when EPA’s outcome policy becomes effective, it will require every grant workplan to address the full strategic plan/GPRA architecture, including outcomes. 
Finally, EPA will not meet the grants management plan’s first-year (2004) target for the performance measure of the environmental outcomes goal: the percentage of grant workplans, decision memoranda, and terms and conditions that discuss how grantees plan to measure and report on environmental outcomes. For this performance measure, using 2003 as its baseline year, EPA determined that, as previously discussed, less than one-third of its grant workplans had environmental outcomes. EPA established targets that progressively increase from this baseline to 70 percent in 2004, to 80 percent in 2005, and to 100 percent in 2006. EPA officials do not expect that EPA will meet its target for 2004 because its outcome policy is not yet in place. EPA has drafted a policy and guidance on environmental outcomes in grants. As drafted, this policy appears to have EPA moving in the right direction for addressing environmental outcomes. The policy (1) is binding on managers and staff throughout the agency, according to EPA officials; previously, the Office of Grants and Debarment targeted only project officers through brief guidance on outcomes in their training manual. It (2) emphasizes environmental results throughout the grant life cycle of awards, monitoring, and reporting: in terms of awards, the draft policy applies to both competitive and noncompetitive grants (for example, program offices and their managers must assure that competitive funding announcements discuss expected outputs and outcomes), and in terms of grant monitoring, the policy requires program offices to assure that grantees submit interim and final grantee reports that address outcomes. And it (3) requires that grants be both aligned with the agency’s strategic goals and linked to environmental results. 
Specifically, the draft policy requires that EPA program offices (1) ensure that each grant funding package includes a description of the EPA strategic goals and objectives the grant is intended to address and (2) provide assurance that the grant workplan contains well-defined outputs, and to the “maximum extent practicable,” well-defined outcome measures. According to an EPA official, while the policy requires that program offices assure that there are well-defined outputs and outcomes, the grant funding package—an internal EPA document—will not identify each output and anticipated outcome. EPA is concerned that certain types of grants have too many outputs and outcomes to enumerate. Potential grant recipients also will not be required to submit workplans that mirror the strategic plan/GPRA architecture, owing to EPA’s concern that such a requirement would cause the grant to be for EPA’s benefit, and thus, more like a contract. EPA included the provision to “the maximum extent practicable” because it recognized that some types of grants do not directly result in environmental outcomes. For example, EPA might fund a research grant to improve the science of pollution control, but the grant would not directly result in an environmental or public health benefit. EPA’s forthcoming policy and guidance faces implementation challenges. First, while the guidance recognizes some of the known complexities of measuring outcomes, it does not yet provide staff with information on how to address them. For example, it does not address how recipients will demonstrate outcomes when there is a long time lag before results become apparent. Second, although the policy is to become effective in January 2005, all staff will not be trained by that time. EPA has planned some training before issuing the policy and has issued a long-term training plan that maps out further enhancements for training grant specialists and project officers on environmental results. 
Finally, EPA has not yet determined how environmental results from its programs will be reported in the aggregate at the agency level. EPA’s forthcoming order establishes that program offices must report on “significant results” from completed grants through existing reporting processes and systems, which each program has developed. EPA plans to convene an agencywide work group in fiscal year 2005 to identify ways to better integrate those systems. In conclusion, we believe that if fully implemented, EPA’s forthcoming outcome policy should help the agency and the Congress ensure that grant funding is linked to EPA’s strategic plan and to anticipated environmental and public health outcomes. We believe that the major challenge to meeting EPA’s goal of identifying and achieving outcomes continues to be in implementation throughout the agency. Realistically, EPA has a long road ahead in ensuring that its workforce is fully trained to implement the forthcoming policy and in educating thousands of potential grantees about the complexities of identifying and achieving environmental results. Given EPA’s uneven performance in addressing its grants management problems to this point, congressional oversight is important to ensuring that EPA’s Administrator, managers, and staff implement its grants management plan, including the critical goal of identifying and achieving environmental results from the agency’s $4 billion annual investment in grants. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information, please contact John B. Stephenson at (202) 512- 3841. Individuals making key contributions to this testimony were Avrum I. Ashery, Andrea W. Brown, Tim Minelli, Carol Herrnstadt Shulman, Rebecca Shea, Bruce Skud, and Amy Webbink. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Environmental Protection Agency (EPA) has faced persistent challenges in managing its grants, which constitute over one-half of the agency's budget, or about $4 billion annually. These challenges include achieving and measuring environmental results from grant funding. It is easier to measure grant activities (outputs) than the environmental results of those activities (outcomes), which may occur years after the grant was completed. In 2003, EPA issued a 5-year strategic plan for managing grants that set out goals, including identifying and achieving environmental outcomes. This testimony describes persistent problems EPA has faced in addressing grants' environmental results and the extent to which EPA has made progress in addressing problems in achieving environmental results from its grants. It summarizes and updates two reports GAO issued on EPA's grant management in August 2003 and March 2004. EPA's problems in identifying and achieving environmental results from its grants persist. The agency is still not consistently ensuring that grants awarded are clearly linked to environmental outcomes in grant workplans, according to GAO's analysis and EPA's internal reviews. For example, EPA's 2003 internal reviews found that less than one-third of grant workplans reviewed--the document that lays out how the grantee will use the funding--identified anticipated environmental outcomes. Not surprisingly, given the lack of outcomes in grant workplans, the Office of Management and Budget's recent review of 10 EPA grant programs found that 8 of the grant programs reviewed were not demonstrating results. Furthermore, not every EPA program office has yet developed environmental measures for their grant programs. EPA's progress in addressing problems in achieving environmental results from grants to this point has been slower and more limited than planned. 
While EPA had planned to issue an outcome policy--a critical ingredient to progress on this front--in 2003, the policy's issuance has been delayed to the fall of 2004, and it will not become effective until January 2005. In the meantime, EPA has issued a limited, interim policy that requires program offices to link grants to EPA's strategic goals but does not link grants to environmental outcomes. Furthermore, as a result of the delay in issuing an outcome policy, EPA officials do not expect to meet the 5-year plan's first-year target for the goal's performance measure. The forthcoming draft policy we reviewed appears to be moving EPA in the right direction for addressing environmental outcomes from its grants. For example, the draft policy emphasizes environmental results throughout the grant life cycle--awards, monitoring, and reporting. Consistent and effective implementation of the policy will, however, be a major challenge. Successful implementation will require extensive training of agency personnel and broad-based education of literally thousands of grantees.
At the end of July 2010, over 10,000 military medical personnel were deployed to Iraq and Afghanistan, 70 percent of whom were Army servicemembers. Of that number, about 4,000 medical personnel were in Iraq and about 6,000 were in Afghanistan. The United States' military presence in Iraq is scheduled to end no later than December 31, 2011, and, according to administration estimates, as of September 2010, about 104,000 U.S. military personnel were deployed in Afghanistan. Figure 1 shows the breakdown of all military medical personnel in Iraq and Afghanistan by service at the end of July 2010. DOD has established five levels of medical care to treat injured or sick military personnel, extending from the forward edge of the battle area to the continental United States, with each level providing progressively more intensive treatment. Over the course of operations in Iraq and Afghanistan, the military has integrated more advanced medical care into the first three levels of care, which are typically provided in theater, in order to provide the most comprehensive care possible closest to the point of injury. Figure 2 illustrates the different levels of medical care that may be provided to U.S. servicemembers who become ill or injured while in theater.

Level 1 – First responder care. This level provides immediate medical care and stabilization in preparation for evacuation to the next level, and treatment of common acute minor illnesses. Care can be provided by the wounded soldiers, medics or corpsmen, or battalion aid stations.

Level 2 – Forward resuscitative care. This level provides advanced emergency medical treatment as close to the point of injury as possible to attain stabilization of the patient. In addition, it can provide postsurgical inpatient services, such as critical care nursing and temporary holding. Examples of level 2 units include forward surgical teams, shock trauma platoons, area support medical companies, and combat stress control units.
Level 3 – Theater hospital care. This level provides the most advanced medical care available in Iraq and Afghanistan. Level 3 facilities provide significant preventative and curative health care. Examples include Army combat support hospitals, Air Force theater hospitals, and Navy expeditionary medical facilities.

Level 4 – Overseas definitive care. This level provides the full range of preventative, curative, acute, convalescent, restorative, and rehabilitative care, most typically outside of the operational area. An example of a level 4 facility is Landstuhl Regional Medical Center in Germany.

Level 5 – U.S. definitive care. This level provides the same level of care as a level 4 facility, but most typically is located in the continental United States. Examples include Walter Reed Army Medical Center in Washington, D.C.; National Naval Medical Center in Bethesda, Maryland; and Brooke Army Medical Center at Fort Sam Houston, Texas.

Not all patients progress through all five levels of care, and patients being evacuated may skip one or more levels of care as appropriate. In addition, joint and service definitions for each level of care vary marginally due to service-specific support requirements, but they essentially align with one another. For purposes of this report, we focused primarily on level 2 and level 3 facilities and their personnel, which provide the most comprehensive and advanced medical care in Iraq and Afghanistan. The U.S. command structure in Iraq and Afghanistan has evolved over time. In 2009, the designation of U.S. troops in Afghanistan became United States Forces-Afghanistan. In 2010, the designation of U.S. troops in Iraq became United States Forces-Iraq. The commanding generals of United States Forces-Iraq and United States Forces-Afghanistan both are advised by a lead surgeon on medical policy and procedures, according to theater medical officials.
Each theater also has a medical task force—the Task Force 1st Medical Brigade and its successor, the Task Force 807th Medical Brigade, in Iraq and the Task Force 30th Medical Command and its successor, Task Force 62nd Medical Command, in Afghanistan—that, according to theater medical officials, consists of professional staff members who coordinate care in theater and directly command medical-only units in theater, such as forward surgical teams and combat support hospitals. The theater surgeon and medical task forces command mostly Army medical facilities. According to a DOD official, the other services maintain and operate additional medical facilities in theater that may be outside the direct command of the medical task force but under the direction of United States Forces-Iraq and United States Forces-Afghanistan. For example, the Air Force operates a theater hospital in Balad, Iraq, but coordinates closely with the task force medical brigade in Iraq. The United States Forces-Iraq Surgeon and staff collaborate closely with the task force medical brigade commander and staff in Iraq to coordinate medical policy and care. The positions of United States Forces-Afghanistan Surgeon and the commander of the task force medical command in Afghanistan are filled by the same individual. According to DOD officials, DOD meets theater medical personnel requirements through its Global Force Management process. DOD designed the Global Force Management process to provide insight into the availability of U.S. military forces to deploy, including medical personnel. Figure 3 depicts the process and the key participants in Global Force Management. Once the Secretary of Defense designates a service to meet a medical requirement, that service identifies and selects units and personnel to fill the requirement.
While the procedures and systems used by each service to select medical personnel vary, the services' processes for filling requirements all result in units and personnel deploying to an operational theater to carry out a mission. Identifying and selecting medical personnel and units to fill requirements can often be challenging due to shortages of medical personnel, but DOD officials told us they have been able to fill almost all medical personnel requirements since the Global Force Management process was established in 2005. More information on the Global Force Management process and the services' personnel filling processes can be found in appendix II. Medical officials in theater continually assess the number and the types of military medical personnel they need to support ongoing contingency operations in Iraq and Afghanistan. Theater officials also analyze gaps in medical care and the associated risks given different potential scenarios. However, it is unclear what level of care deployed DOD civilian employees can expect in theater because a DOD directive governing medical care for DOD deployed civilians is inconsistent with in-theater guidance with regard to their eligibility for routine medical care. In response to congressional interest about deployed civilians, the Secretary of Defense reported to Congress in April 2010 that with each new mission, the need for new civilian skills has resulted in an increase in the number of deployed civilians and that these civilians are not immune to the dangers associated with contingency operations. Although we did not learn of any DOD deployed civilians turned away for care in theater during the period of our review, officials in theater did say this could be a concern if the number of civilians increased, and at that time they would assess the impact of a civilian increase on the need for more medical personnel.
At the conclusion of our audit, an Army official agreed that if there is an inconsistency between departmental guidance and theater guidance, it should be clarified. Thus, by examining inconsistencies between departmental guidance and theater guidance on the level of routine medical care, DOD could reduce the uncertainty about the level of routine care these deployed civilians can expect in theater. In response to a draft of this report, DOD told us that its operating units have sufficient organic medical support and that the medical needs of deployed civilians are being met. DOD also agreed that the Commander of U.S. Central Command should revise its guidance to clarify the level of care that deployed civilians should receive. Theater operational and medical officials determine the number and types of medical personnel needed to support operations in Iraq and Afghanistan through an ongoing assessment, which includes an evaluation of the operational mission and other planning factors, such as historical injury statistics and medical workload data. In their assessment, theater officials also analyze gaps in medical care given different potential scenarios and the associated risks. This ongoing assessment takes place in theater and allows theater officials to identify new medical personnel requirements and regularly reevaluate existing medical personnel requirements. Further, theater operational and medical officials consider operational limitations when developing their medical personnel requirements, including the limit on the total number of forces in theater and shortages of and high demand for certain medical personnel. In determining the number of military medical personnel and the medical specialties needed, theater operational and medical officials told us that they begin by evaluating various mission planning factors, such as the number and dispersion of U.S. forces, the expected intensity of combat, capabilities of the adversary to inflict harm, geography, and climate. Officials said that this information allows them to determine the level and structure of medical care they expect to need to support missions throughout the theater of operations. For example, in planning for the increase of U.S. forces in Afghanistan beginning in early 2010, officials with the U.S. Central Command requested additional medical units and personnel, including a theater hospital and a preventative medicine unit, to provide medical care to the increased number of U.S. military personnel in theater. In addition, during the offensive in Bastion, Afghanistan, officials with the Task Force 30th Medical Command told us that they relocated some mental health providers in Afghanistan to Bastion for the duration of the heightened operational tempo so this type of care could be better provided in the area experiencing hostilities. To further assess the need for specific types of medical specialists in a given unit and across the theater, medical officials analyze data from the Joint Theater Trauma Registry, the Joint Medical Work Station, and service and joint data on disease and non-battle injuries to determine trends in medical workload. Officials use this information to increase or decrease the number of medical personnel in line with demand for medical services. For example, DOD medical officials conducted an analysis to determine the need for cardiovascular specialists in Iraq and Afghanistan based on, among other variables, the volume of cardiovascular-related medical evacuations in theater. Officials also analyze gaps and risks in the medical care structure under different possible scenarios.
For example, the Task Force 1st Medical Brigade in Iraq conducted an analysis that identified possible requirements for additional medical personnel with certain specialties, such as general surgeons, at locations in northern Iraq given the possibility of adverse weather conditions that would prohibit medical evacuation of patients to more advanced medical care facilities. Further, when confronted with a need for additional medical personnel, the theater commanding general can submit a request for forces through DOD’s Global Force Management process. For example, we learned of two Army sustainment brigades—the 82nd and the 43rd Regional Support Commands—that deployed to Afghanistan with their authorized medical personnel but did not have enough medical personnel to provide full support to their convoys and forward locations. In response, Task Force 62nd Medical Command in Afghanistan requested additional forces for these two brigades. Officials told us that DOD met this requirement by deploying 22 Air Force medics to Afghanistan. Additionally, medical officials in Iraq and Afghanistan told us that they must consider two operational limitations which affect how many medical personnel they formally request. First, the cap on the total number of U.S. forces allowed in Iraq and Afghanistan requires theater commanders to balance the number of medical personnel they request with many other types of forces needed to conduct and support ongoing operations. For instance, officials in Afghanistan told us that when they initiate requests for additional personnel, the requesting unit is asked to offset the increase in forces on a one-to-one basis within the unit. If they are unable to do so, operational and medical officials determine if the request for additional medical forces takes precedence over the need for other types of personnel already in theater, and if so they decide which personnel will redeploy out of theater to stay within the authorized force cap. 
Second, shortages of and high demand for medical personnel in certain specialties also play a role in decisions about whether to request medical forces. For example, officials in Iraq determined that 16 additional veterinary food inspectors were needed for food safety inspections, but they did not formally initiate that request due to the current shortage of these specialists. Although DOD primarily provides both emergency life-saving medical care and routine medical care to U.S. military personnel in Iraq and Afghanistan, it is unclear what level of routine medical care deployed DOD civilian employees can expect in theater. DOD relies on its own deployed civilians to carry out or support a range of essential missions, including logistics support, maintenance, intelligence collection, criminal investigations, and weapon systems acquisition. About 2,600 DOD civilian employees were deployed to Iraq, and about 2,000 DOD civilian employees were deployed to Afghanistan, according to DOD's April 2010 report to Congress on medical care for injured or wounded deployed U.S. federal civilians. In response to congressional interest, DOD reviewed the department's existing policies for medical care for DOD deployed civilians and federal civilian employees who might be injured or wounded in support of contingency operations and reported to Congress on the results in April 2010. DOD noted in its report that with each new mission, the need for new civilian skills has resulted in an increase in the number of deployed civilians and that these civilians are not immune to the dangers associated with contingency operations, since they too incur injuries or wounds in their efforts to support the missions in Iraq and Afghanistan.
Although DOD guidance clearly provides that deployed DOD civilians will receive life-saving emergency care, it is unclear to what extent DOD civilians can expect routine medical care in theater because a DOD directive and theater guidance differ with regard to their eligibility for routine medical care. Specifically, DOD Directive 1404.10 states that the department’s civilian employees who become ill, are injured, or are wounded while deployed in support of U.S. military forces engaged in hostilities are eligible to receive health care treatment and services at the same level and scope provided to military personnel. However, theater guidance for Iraq and Afghanistan, which provides detailed information on medical care to deployed civilians, among others, states that DOD civilians are eligible for emergency care but most routine care for them is subject to availability. This differs from the DOD directive that states care should be at the same level and scope provided to military personnel. In addition, we found that the theater guidance document for care in Afghanistan provided additional guidance that is inconsistent with both the DOD directive and with guidance provided elsewhere in the document as to the level of care to be provided to DOD deployed civilians. Specifically, one section of the guidance stated routine care for all civilians was to be provided subject to availability while another section of the same guidance stated routine care was to be provided for deployed DOD civilians in accordance with a previous issuance of DOD Directive 1404.10. The previous version of DOD Directive 1404.10 indicated that civilians designated as emergency essential employees would be eligible for care at the same scope provided to military personnel, while the current January 2009 DOD directive extends the provision of routine medical care to a much wider group of DOD deployed civilians. 
Medical officials in Afghanistan told us that they provide routine medical care to U.S. federal civilians on a space-available basis, and that they would not turn away any person with injuries that presented a danger to life, limb, or eyesight, regardless of the employment status of an individual. This issue has received continuing congressional interest. For example, in April 2008 the House Armed Services Committee Subcommittee on Oversight and Investigations issued a report on deploying federal civilians and addressed the medical care provided to them when they are wounded, ill, or injured while in a war zone. Furthermore, DOD’s report to Congress on deployed DOD civilians stated that the department believes it is imperative that each federal civilian understands where, when, and how they can receive medical treatment in theater. Although we did not learn of any deployed DOD civilians being turned away from receiving routine care in theater during the time of our review, officials in theater said it could be a concern if the number of DOD civilians that deploy increases, and that theater medical officials would assess the impact of any increase on the planning process for determining medical personnel requirements. However, if theater officials concluded that they needed more medical personnel due to increases in numbers of DOD deployed civilians, we recognize that an increase in medical resources would have to be balanced against other high-priority needed resources due to the force cap limiting the overall numbers of military personnel that can be in theater. 
For example, the former commander who oversaw military medical units in Afghanistan noted to us that while there is no medical-specific force cap (that is, no separate limit on the number of medical personnel within the larger force cap), any additional military personnel needed in theater must be balanced by the loss of other military personnel in other areas, such as a transportation unit, and that the force cap has played a role in decisions about medical personnel requirements. Additionally, the current commander who oversees military medical units in Afghanistan stated that local base commanders can request additional medical personnel if they believe that the number of U.S. soldiers or civilians merits an increase. The official stated that an increase of about 800 to 1,500 civilians would have to occur before they would consider revising military medical personnel requirements. At the conclusion of our audit, an Army official agreed that if there is an inconsistency between departmental guidance and theater guidance, it should be examined. As long as theater guidance differs from the requirements of departmental directives, uncertainty about deployed civilians' eligibility for routine care in theater will remain, and the military medical personnel requirements planning process may not be fully informed by department-level expectations. Theater commanders in Iraq and Afghanistan are providing quicker access to advanced emergency medical care by placing more medical units in more geographical areas to save lives. However, Army doctrine, which is the starting point for defining and planning a unit's capabilities, has not been fully updated to reflect these changes in theater. Also, the organizational design of these medical units used in theater, which indicates the number and mix of skilled medical personnel these units should have, has not been updated to reflect current practice in theater.
Specifically, commanders in Iraq and Afghanistan have been splitting or reconfiguring medical units typically designed to operate in one location into multiple smaller units to cover a wider geographical area. For example, as of December 2009 the Task Force 28th Combat Support Hospital in Iraq—a field hospital typically designed to be in one location—was split among three separate sites in Iraq—Baghdad, Tallil, and Al Kut—to better cover this large operational area. Theater medical commanders split these units because they found that the field hospital's standard design configuration was no longer suitable for the model of care that has evolved in Iraq, which requires access to more advanced medical care—particularly surgical care—over large geographical distances to better save lives. Splitting medical units, such as level 3 combat support hospitals and level 2 forward surgical teams, in order to locate them in more areas increases the opportunities to provide advanced emergency care more quickly and could save more lives. According to documents from the 28th Combat Support Hospital, the number of surgical sites has increased due to the emphasis on providing troops access to surgical care within 60 minutes of being injured. DOD has stated that by providing advanced life-saving emergency medical care more quickly, generally within 60 minutes of injury, survival rates increase significantly. In fact, studies show that 90 percent of those severely injured or wounded do not survive if advanced medical care is not provided within 60 minutes of injury, thus creating urgency for rapid access to the wounded. Medical officials in Iraq acknowledged that Army doctrine and the organizational design of medical units were top issues that needed to be updated to better reflect the current practice of splitting medical units such as combat support hospitals.
For example, in a December 2009 Mid-Tour Report, the Task Force 1st Medical Brigade—the medical unit that provided oversight over medical units in Iraq before being replaced by Task Force 807th Medical Brigade—noted that the organization for combat support hospitals, including the list of needed medical specialties, should be redesigned to reflect the actual use of combat support hospitals across multiple locations and that certain lessons learned could be considered in the redesign. Specifically, Task Force 1st Medical Brigade reported that splitting full-sized combat support hospitals into smaller parts can create medical personnel gaps in certain specialties, including those related to the operation of pharmacies, laboratories, and patient administration. The medical brigade’s report also went on to note that personnel with these smaller combat support hospitals are spread so thinly that when personnel take leave or are evacuated out of theater due to injury, the medical brigade has to make difficult decisions on where to find needed personnel to mitigate coverage gaps. Given these lessons learned, officials with the Task Force 1st Medical Brigade told us that they were concerned about outdated policies, guidance, doctrine, and field manuals related to the determination of medical personnel requirements in theater and stated specifically that the current design of combat support hospitals is not flexible enough to accomplish what they are now being asked to do. As such, they now have to continuously use what is referred to as specialized personnel documents to manage staffing rather than staff as indicated in established doctrine and the organization design of these units. 
Specifically, officials with the Task Force 1st Medical Brigade noted to us that staffing of medical units is now done in a "very non-doctrinal fashion" and that they had similar concerns about splitting area support medical companies and using them in theater in a non-doctrinal fashion, given that these area support medical companies now function as two separate level 2 troop medical clinics when they are staffed to function as one. Finally, the Task Force 1st Medical Brigade report went on to recommend that the organizational composition of combat support hospitals be redesigned to include redundant capability to accommodate expected attrition in staff. Additionally, officials with the U.S. Forces-Iraq Surgeon Office told us in a separate interview that medical doctrine, specifically the organizational design for both personnel and equipment, should be assessed and updated given the current experience in Iraq. These officials said that the splitting of combat support hospitals and forward surgical teams has gained acceptance over time but should be examined given how counterinsurgency doctrine is implemented in Iraq. These officials with the Surgeon Office in Iraq also said that flexibility in the doctrine is critical, but that doctrine needs to reflect the realities of operations on the ground and that the degree to which the current practice of splitting medical units has filtered into medical doctrine has been limited. Recognizing these lessons learned in an environment that is continuing to evolve to provide advanced medical care to save more lives, officials with the Army Medical Department Center and School, who are responsible for updating medical doctrine and the organizational design of medical units, recently updated the forward surgical team field manual, noting that changes in the number and mix of specialists that make up a forward surgical team might be necessary if such teams are to operate as smaller stand-alone units.
However, the updated manual did not specifically suggest what those changes in the number and mix of medical specialists that make up a forward surgical team should be if the team is providing advanced emergency care as a stand-alone unit. We were told that Army planners have adjusted medical personnel requirements for forward surgical teams to account for these smaller nonstandard medical unit reconfigurations by increasing the number of personnel assigned to those units, but the updated field manual still does not specify what the number and mix of medical specialists should be. Furthermore, by splitting the standard design for combat support hospitals, DOD has also had to adjust the number and mix of medical personnel in those units as well. Instead of relying on the standard doctrinal design for medical units in theater, Army medical officials have been developing specialized personnel documents to staff these medical units and to identify the medical skill sets now needed to operate split medical units across multiple locations for counterinsurgency operations. Specifically, officials with the Task Force 1st Medical Brigade told us these specialized personnel documents allow for more up-to-date establishment of personnel requirements to address gaps caused by splitting medical units. However, the process is difficult, and it came about because current doctrine and organizational design were not sufficient to address the capabilities needed for splitting medical units such as combat support hospitals and area support medical companies.
Although the Army medical officials we spoke with said that they believe splitting and reconfiguring units in theater is necessary and helps to increase survival rates by providing advanced life-saving emergency medical care generally within 60 minutes of injury, the Army has not fully incorporated these current practices into Army doctrine and organizational documents, which ordinarily determine the size, composition, and use of these units. In response to a draft of this report, DOD explained to us that Army leadership has recognized that split hybrid operations and the dispersed environment in the theater of operations have generated a requirement for additional medical structure. According to an Army regulation, the Army maintains a lessons learned program to, among other things, systematically update Army doctrine to enhance the Army’s preparedness to conduct current and future operations. By updating Army doctrine and organizational documents for the design of medical units that could be used in other theaters, the Army could benefit from incorporating its lessons learned, where appropriate, and be better assured that the current practice of splitting medical units to quickly provide advanced life-saving emergency medical care to those severely injured or wounded does not lead to unnecessary staffing challenges. When medical personnel gaps unexpectedly arise in Iraq or Afghanistan, Army commanders have used two approaches to fill those gaps, according to medical officials in theater. Gaps in medical capabilities can occur when medical providers do not deploy as expected for reasons such as resignation, or when a medical provider is determined to be medically nondeployable. Medical personnel gaps can also occur when individual medical personnel need to leave the unit for reasons such as an emergency situation at home or because they become seriously sick or injured in theater. 
When these gaps occur, the two approaches Army commanders have used to fill them are backfilling and cross-leveling, according to medical officials in theater. Backfilling involves the identification and deployment of medical personnel into theater from the United States or elsewhere who were not originally scheduled to deploy overseas at that time. For example, a dentist assigned to a brigade combat team in southern Iraq was evacuated out of theater for medical reasons. Given the backlog of needed dental work, commanders expressed concern about losing a dentist. In response, Army Forces Command initiated an effort to identify another dentist not in Iraq who was eligible to deploy to fill this need. DOD officials told us that selecting and deploying an active component medical provider to backfill a position typically takes about 45 days. Cross-leveling involves the temporary relocation of personnel from one unit in theater to another, according to DOD officials. Medical officials in theater told us that cross-leveling is often used as an interim measure to minimize risk when a gap in medical personnel coverage occurs. For example, an operating room nurse assigned to a forward surgical team in Iraq had an unexpected medical situation and was evacuated out of theater. It was critical that this personnel requirement be filled in a timely manner, given that the forward surgical team was staffed with only one operating room nurse. Theater officials requested a replacement from U.S. Army Forces Command and U.S. Army Reserve Command, but the individual identified as a replacement could not deploy for at least 30 days. Recognizing the high priority need for a forward surgical team to have an operating room nurse, Task Force 1st Medical Brigade identified an operating room nurse that it could borrow from another unit in theater until the replacement arrived. 
After the replacement nurse arrived in theater, the operating room nurse on loan returned to the unit from which the individual came. Personnel gaps that occur in theater cannot always be prevented, and when gaps do occur, theater commanders assess the risk associated with the gap and decide on an appropriate course of action, according to officials with Task Force 1st Medical Brigade. Cross-leveling in particular requires assessing both the risk associated with the personnel gap and the gap that would be created by relocating a medical provider from another unit. According to theater commanders we spoke with, cross-leveling, while temporary, is not an ideal solution and can present risk to medical operations in theater, especially when conducted on a recurring basis. We recognize that risk cannot be eliminated; it can only be managed. Army officials told us that they are willing to accept some risks in order to mitigate other risks they believe are higher. According to medical officials, when medical personnel gaps in an Army reserve component medical unit occur, it can be challenging to fill the gap before the start of the next 90-day rotation, given that it can take around 120 to 180 days to identify, notify, and then mobilize an Army reservist to fill an unfilled requirement, by which time the next expected 90-day medical provider has already arrived. The Army’s 90-day rotation policy—while intended to ease the financial burden of deploying reserve medical personnel and help retain them—has presented some challenges for the Army in quickly filling these gaps when a medical provider is not able to deploy. For example, the 915th Forward Surgical Team—an Army reserve medical unit—was authorized to deploy to Iraq in September 2009 with three general surgeons, according to theater medical officials. Instead, it deployed with only one surgeon for the first 90-day rotation, despite efforts to identify two other deployable general surgeons. 
The Army Reserve identified a doctor to fill one of the two vacancies; however, this individual could not deploy due to an inability to be credentialed as a general surgeon. The Army Reserve then identified another surgeon for deployment, but this individual had unresolved educational requirements, and yet a third identified surgeon resigned. By the time the Army Reserve was able to identify a surgeon who could deploy, the 915th Forward Surgical Team had been in Iraq for a month of its first 90-day rotation. Further, the Army was unable to identify the third authorized surgeon for the 915th Forward Surgical Team before the end of that 90-day rotation because another identified surgeon scheduled for deployment resigned, and the replacement surgeon turned out to be nondeployable for medical reasons. In fact, the 915th Forward Surgical Team was missing one of its three authorized general surgeons for the first three 90-day rotations—approximately 270 days. Moreover, the 915th Forward Surgical Team was expected to operate as two smaller units at two separate locations in southern Iraq, but without its three authorized general surgeons it was unable to provide surgical capabilities in both locations as expected. As a result of the personnel gaps, Task Force 1st Medical Brigade temporarily relocated medical personnel already in theater from other medical units to the 915th Forward Surgical Team so it could meet its mission. Although we found examples of the 915th Forward Surgical Team not having all of its medical personnel before the end of each 90-day rotation, Army data show the magnitude of these unfilled gaps or late arrivals for the reserve components ranged from about 3 percent to 7 percent from January 2008 to July 2010. Specifically, Army data showed that about 4 percent of mobilized Army reserve component 90-day medical rotators (21 medical providers out of 594) did not deploy to theater or arrive in theater on time in 2008. 
In 2009, that figure reached 7 percent (38 medical providers out of 519) and through the first 6 months of 2010, this figure was over 3 percent (8 medical providers out of 236). Unfilled reserve component personnel requirements can have serious consequences depending on the needed medical specialty. Therefore, medical commanders in theater typically cross-level to fill short-term temporary personnel gaps, although medical officials in Iraq we spoke with said cross-leveling is a less than ideal approach to fill these medical personnel gaps. DOD has continued to assess its need for medical personnel in theater based on the requirements of the mission and a variety of medical data and has made adjustments to meet specific theater needs to achieve the goal of providing advanced life-saving care quickly. DOD has noted that, increasingly, deployed civilians also face dangerous circumstances in ongoing contingency operations. While DOD has stated that deployed civilians will receive emergency care whenever needed, the extent of routine medical care available to DOD deployed civilians is unclear due to inconsistent guidance. Inconsistent guidance could potentially impact the medical personnel requirements planning process if medical officials in theater are uncertain about deployed DOD civilian employees’ access to routine medical care. While we did not learn of any deployed DOD civilians being turned away for medical care in theater during the time of our audit, DOD could still benefit by assessing the implications the inconsistencies in guidance could have if there were a sizeable increase in the number of DOD deployed civilians in theater. 
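The reserve-component shortfall rates for 2008 through mid-2010 cited above follow directly from the reported counts. The check below uses the raw figures from the report; the one-decimal rounding is our own convention, and note that the 2008 value of 3.5 percent is described in the report as "about 4 percent."

```python
# Reproduce the reserve-component shortfall rates from the raw counts
# reported above (unfilled or late 90-day medical rotators / total mobilized).
rotations = {
    "2008": (21, 594),
    "2009": (38, 519),
    "2010 (first 6 months)": (8, 236),
}

for period, (unfilled, total) in rotations.items():
    pct = 100 * unfilled / total
    print(f"{period}: {unfilled}/{total} = {pct:.1f}%")
```

Running this confirms the range of roughly 3 to 7 percent stated in the report.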
Conducting counterinsurgency operations in often uncertain, dangerous environments such as Iraq and Afghanistan, Army theater commanders have reconfigured the composition of field hospitals and forward surgical teams by breaking them down into smaller stand-alone units to better position them to give the severely wounded or injured, such as the casualties of blast-type injuries, the advanced emergency medical care needed to save lives. By operating in more geographical areas, these critical life-saving medical units are better able to achieve their goal of providing advanced emergency medical care within 60 minutes of injury to increase survival rates. Acknowledging the current practice of splitting medical units, the medical brigade that provided oversight of medical units in Iraq reported that one of its top issues was advocating for updates to the doctrine and organizational design that govern these split units’ use and personnel allocation. The Army could benefit from systematically integrating the lessons learned from this practice, especially concerning the needed number and mix of medical personnel, into Army doctrine and the design of these medical units. Updating the doctrine and organizational design of these split medical units could help assure that such units will be resourced with the needed number and mix of medical personnel to continue providing critical life-saving capabilities for counterinsurgency operations in other theaters and in the future. To better understand the extent to which deployed DOD civilian employees have access to needed medical care, as appropriate, we recommend that the Secretary of Defense direct the Combatant Commander of U.S. Central Command to clarify the level of care that deployed DOD civilian employees can expect in theater, including their eligibility for routine care. 
To enhance medical units’ preparedness to conduct current and future operations given the changing use of combat support hospitals and forward surgical teams in Iraq and Afghanistan, we recommend that the Secretary of the Army direct the Army Medical Department to update its doctrine and the organization of medical units concerning their size, composition, and use. In written comments provided in response to a draft of this report, DOD generally concurred with our findings and recommendations. DOD fully concurred with our first recommendation that the department clarify the level of care that deployed DOD civilian employees can expect in theater. DOD partially agreed with our second recommendation that the Army Medical Department update its doctrine and the organization of medical units concerning their size, composition, and use. DOD noted that there is an unquestionable need to formally update doctrinal publications. DOD also noted that the Army is constantly reviewing and assessing medical capability, the use of those capabilities, and the organization of medical units, and updating doctrine to reflect evolving staffing requirements. As an example, DOD mentioned in its official response that a recent review of medical capability indicated the need for additional medical personnel, and the Army responded with guidance to increase the number of enlisted health care specialists assigned to Army Brigade Combat Teams. The department also noted that the Army continues to capture lessons learned and input from commanders to ensure that the use of medical personnel meets requirements. 
We recognize that the Army continues to capture lessons learned and input from commanders, and we noted in our report that the Army Medical Department Center and School has updated its forward surgical team field manual, although the updates did not specifically note changes in the number and mix of medical specialists that make up a forward surgical team if the team is providing advanced emergency care as a stand-alone unit. Thus, we still believe the Army would benefit by fully updating the organization of medical units concerning their size, composition, and use, as applicable, to incorporate current practices of splitting and reconfiguring deployed medical units in theater. DOD also provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense, the Secretary of the Army, and appropriate DOD organizations. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3604 or by e-mail at farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to the report are listed in appendix IV. We examined the Department of Defense’s (DOD) efforts to identify and fill its military medical personnel requirements in support of operations in Iraq and Afghanistan. Specifically, we evaluated the extent to which (1) DOD has assessed its need for military medical personnel in Iraq and Afghanistan, (2) the Army has adapted the composition and use of its medical units to provide advanced medical care, and (3) the Army fills medical personnel gaps that arise in theater. During our evaluation, we contacted DOD and service officials, including officials from United States Forces-Iraq and United States Forces-Afghanistan; U.S. Central Command; U.S. 
Joint Forces Command; Joint Staff; Office of Secretary of Defense for Health Affairs; Offices of the Surgeons General for the Army, the Navy, and the Air Force; and U.S. Marine Corps Headquarters. For the first objective—to evaluate the extent to which DOD has assessed its need for military medical personnel in Iraq and Afghanistan to support ongoing operations—we analyzed DOD and service policies and processes that govern the determination of medical personnel requirements, including service doctrine, DOD guidance, and current theater-level guidance regarding medical care in Iraq and Afghanistan. Specifically, we compared a current DOD directive regarding medical care for DOD civilian employees and theater-level guidance regarding medical care for U.S. federal civilians, including DOD civilian employees, and noted how they differed. To augment our analysis, we interviewed officials, including representatives from the theater medical task forces and Surgeons’ offices in Iraq and Afghanistan about how they assess their military medical personnel needs in Iraq and Afghanistan and possible effects of differences in guidance that govern medical care in theater. For the second objective—to evaluate the extent to which the Army has adapted the composition and use of its medical units to provide advanced medical care in Iraq and Afghanistan—we reviewed reports from the medical task forces in theater, Army documentation of the composition of medical units in Iraq and Afghanistan, theater-level publications regarding medical care in Iraq and Afghanistan, Army medical doctrine, and Army field manuals for medical units. We interviewed officials, including officials with the medical task forces and Surgeons’ offices in Iraq and Afghanistan about the current use and composition of medical units in theater, and the extent to which they are captured within official Army documentation of doctrine and the organization of medical units. 
In addition, we interviewed representatives from the Army Medical Department Center and School, Directorate of Combat and Doctrine Development, about the relevance of doctrine and the organization of medical units and the role that lessons learned in Iraq and Afghanistan might play in any plans to update doctrine and the organization of medical units in the future. For the third objective—to evaluate the extent to which the Army fills medical personnel gaps that arise in Iraq and Afghanistan—we reviewed the approaches used by Army theater medical commanders to meet medical personnel requirements when gaps in needed personnel coverage occurred, and we interviewed officials with the theater-level medical task forces and Surgeons’ offices in Iraq and Afghanistan regarding the reasons why unexpected medical personnel needs arose and the approaches used to address those needs in theater. When possible, we obtained and reviewed supporting documentation and interviewed other officials involved in these efforts to fill unexpected medical personnel needs in theater, including officials with U.S. Army Forces Command. We also reviewed policies and guidance for meeting medical personnel needs that arise in theater for both the active and reserve components, specifically the Army’s 90-day deployment policy for reservists applicable to physicians, dentists, and nurse anesthetists. To determine the extent to which the Army’s reserve component medical units deployed their authorized medical personnel in 2008, 2009, and through the first 6 months of 2010 to Iraq and Afghanistan, we reviewed the Army’s deployment data on late deployments of medical providers from the reserve components. We assessed the reliability of the data by interviewing the agency official responsible for manually collecting and summarizing the data. We determined that the data were sufficiently reliable for the purposes of this report. 
Additionally, to better understand how military medical personnel requirements are met, we obtained information on DOD’s Global Force Management process and how the services identify medical units and personnel to fill these requirements. We interviewed officials with the Joint Staff, U.S. Joint Forces Command, and the military services’ force providers, including U.S. Army Forces Command, U.S. Fleet Forces Command, U.S. Air Combat Command, and U.S. Marine Forces Command, as well as officials with the Army Medical Command, the Navy Bureau of Medicine and Surgery, and the Air Force Personnel Center about their processes for filling in-theater military medical personnel requirements. For a more comprehensive listing of the organizations and offices we contacted, see table 1. We conducted this performance audit from August 2009 through January 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Defense (DOD) uses its Global Force Management process to meet its requirements, including those for medical personnel and units. For ongoing operations, this process periodically examines requirements for rotational forces as well as emerging requirements as they arise. In addition, the services each use unique yet similar processes to identify and select medical units and personnel to fill requirements for Iraq and Afghanistan. DOD designed the Global Force Management process to provide insight into the global availability of U.S. military forces. For the rotational force management process, requirements are identified 2 years in advance. 
The rotational force management process is facilitated through Global Force Management Boards, which are typically held on a quarterly basis. The Global Force Management Board brings together general officers from interested parties—Office of the Secretary of Defense, the Joint Staff, the combatant commanders, the services, and the joint force providers—to specifically lay out known requirements, review and endorse sourcing recommendations and associated risk and risk mitigation options, and then to prioritize and meet the requirements as appropriate. The product of these Global Force Management Boards is the Global Force Management Allocation Plan, a document that is approved by the Secretary of Defense, which authorizes force allocations and deployment of forces in support of combatant commander rotational requirements. In both Iraq and Afghanistan, medical personnel and unit requirements are included in the Global Force Management Allocation Plan, which provides an approach for U.S. Central Command, the services, and the services’ force providers to manage the sourcing of rotational requirements, including requirements for medical personnel and units, such as the Balad Theater Hospital in Iraq or a combat support hospital in Afghanistan. For requirements, including medical personnel and units, that are not known in advance, DOD used the emergent force management process extensively to meet requirements through requests for forces. Generally, the parties involved in this process have separate, sequential roles in the process. Requests for forces are generated by combatant commanders and submitted to the Joint Staff for validation, and then to the joint and service force providers to identify potential sourcing solutions to fill requirements before being transmitted to the Secretary of Defense for approval. In sourcing requests through the emergent process, requirements are prioritized according to a force allocation decision model. 
While emergent requirements are considered within the model’s general framework, each request for forces is individually evaluated as it is received, meaning that officials focus on whether or not forces are ready and available to fill the request rather than trying to determine the relative priority of the request, as is done at the Global Force Management Boards for rotational requirements. As part of providing and evaluating potential solutions for a request for forces, the services’ force providers often conduct risk assessments to provide information on the availability and readiness of both active and reserve forces. These risk assessments consider potential violations of the services’ rotation policies regarding the required time at home for servicemembers, as well as the impact to current missions and operations, such as the staffing of U.S. military treatment facilities in the case of medical personnel, if a service is selected to meet the requirement. In addition, each of the services maintains a list of specialties that are in high demand relative to available personnel. All of the services identified critical care nurse, physician assistant, psychiatry, and clinical psychology as high-demand specialties. The services use unique yet similar processes to identify and select medical units and personnel to fill requirements for Iraq and Afghanistan. Once the Secretary of Defense designates a service to meet an emergent or rotational requirement, the service’s force provider then begins the process of filling the requirement with personnel. While the procedures and systems used by each service to select the appropriate medical personnel vary, the services’ processes for filling requirements all result in a unit and its personnel deploying to an operational theater to carry out a mission. 
The identification of individual medical personnel to fill the requirements is important because medical personnel across the services typically are assigned to fixed military treatment facilities caring for active duty personnel, their dependents, and retirees. However, in wartime, each service’s medical personnel processes allow for the deployment of medical personnel from fixed military treatment facilities to support contingency operations, such as those in Iraq and Afghanistan, while considering potential impacts on the medical mission of the fixed military treatment facilities. In addition, the processes attempt to distribute the burden of deployments within and across medical specialties (e.g., orthopedic surgeons, critical care nurses, and psychiatrists) and to comply with service guidelines, such as required time at home for servicemembers, in order to maintain a healthy inventory of medical specialists. In addition to the contact named above, Laura Talbott, Assistant Director; John Bumgarner; Susan Ditto; K. Nicole Harms; Stephanie Santoso; Adam Smith; Angela Watson; Erik Wilkins-McKee; Michael Willems; and Elizabeth Wood made major contributions to this report.
For ongoing operations in Afghanistan and Iraq, military medical personnel are among the first to arrive and the last to leave. Sustained U.S. involvement in these operations has placed stresses on the Department of Defense's (DOD) medical personnel. As the U.S. military role in Iraq and Afghanistan changes, the Army must adapt the number and mix of medical personnel it deploys. In response to Congress' continued interest in the services' medical personnel requirements in Iraq and Afghanistan, GAO evaluated the extent to which (1) DOD has assessed its need for medical personnel in theater to support ongoing operations, (2) the Army has adapted the composition and use of medical units to provide advanced medical care, and (3) the Army fills medical personnel gaps that arise in theater. To do so, GAO analyzed DOD policies and procedures on identifying personnel requirements, deploying medical personnel, and filling medical personnel gaps in Iraq and Afghanistan, and interviewed officials. Medical officials in theater continually assess the number and the types of military medical personnel they need to support contingency operations in Iraq and Afghanistan and analyze the risks if gaps occur. Given congressional interest in deployed civilians, DOD reported to Congress in April 2010 that with each new mission, the need for new civilian skills has resulted in an increase in deployed civilians and that these civilians are not immune to the dangers associated with contingency operations. Although GAO did not learn of any DOD deployed civilians turned away for care in theater during this review, it is unclear to what extent they can expect routine medical care in theater, given that a DOD directive and theater guidance differ with regard to their eligibility for routine care. 
By clarifying these documents, DOD could reduce uncertainty about the level of routine care deployed DOD civilians can expect in theater and provide more informed insights into the military medical personnel requirements planning process. Army theater commanders have been reconfiguring or splitting medical units to cover more geographical areas in theater to provide advanced life-saving emergency care more quickly, but Army doctrine and the organizational design of these units, including needed staff, have not been fully updated to reflect these changes. Studies show that for those severely injured or wounded, 90 percent do not survive if advanced medical care is not provided within 60 minutes of injury. Officials in theater told GAO they are using specialized personnel documents to staff these medical units with more up-to-date personnel requirements to address gaps caused by splitting medical units, and that current doctrine and organizational design were not sufficient to address the capability needed for splitting medical units. According to an Army regulation, the Army maintains its lessons learned program to systematically update Army doctrine and enhance the Army's preparedness to conduct current and future operations. By updating Army doctrine and organizational documents for the design of medical units that could be used in other theaters, the Army could benefit from incorporating its lessons learned, where appropriate, and be better assured that the current practice of splitting medical units to quickly provide advanced life-saving emergency medical care to those severely injured or wounded does not lead to unnecessary staffing challenges. Army commanders have used two approaches--cross-leveling and backfilling--to fill medical personnel gaps that arise in theater due to reasons such as illnesses, emergency leave, and resignations of medical personnel. 
When these gaps in needed medical personnel occur, the Army's 90-day rotation policy--while intended to ease the financial burden of deploying reserve medical personnel and help retain them--has presented some challenges in quickly filling these gaps in theater with reserve medical personnel when a medical provider is not able to deploy. However, Army data show the magnitude of these unfilled gaps or late arrivals for the reserve component medical providers ranged from about 3 percent to 7 percent from January 2008 to July 2010. GAO recommends that (1) DOD clarify the level of routine medical care that deployed DOD civilian employees can expect in theater and (2) the Army update its doctrine and the organizational design of split medical units. In response to a draft of this report, DOD generally concurred with the recommendations.
DOD has established criteria for the types of items that are required to be marked with IUID labels. For principal end items and contractor-marked secondary items, the criteria are as follows:
1. all items for which the government’s unit acquisition cost is $5,000 or more;
2. items for which the government’s unit acquisition cost is less than $5,000, when identified by the requiring activity as DOD serially managed, mission-essential, or controlled-inventory;
3. items for which the government’s unit acquisition cost is less than $5,000 and the requiring activity determines that permanent identification is required; and
4. regardless of value, (a) any DOD serially managed subassembly, component, or part embedded within an item and (b) the parent item that contains the embedded subassembly, component, or part.
For secondary items in use or in inventory, the criteria are as follows:
1. all DOD serially managed items, including, but not limited to, sensitive, critical safety, or pilferable items that have a unique item-level traceability requirement at any point in their life cycle, and all depot-level reparable items; and
2. any other item that the requiring activity decides requires unique item-level traceability at any point in its life cycle.
In order to use IUID technology, four processes must be completed. First, each item that qualifies for IUID marking—according to DOD’s criteria—is labeled. IUID labels often contain some human-readable information, printed as text on the label. The amount and type of human-readable information varies, but it often contains key details about the item, such as its National Stock Number, part number, or serial number. These are categories that DOD components use to identify items in their inventories. In certain cases, items do not have labels attached to them and instead are labeled through a process known as direct part marking, in which the human- and machine-readable information are applied directly to the item. 
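The first set of criteria above, for principal end items and contractor-marked secondary items, amounts to a simple decision rule. The sketch below is our paraphrase for illustration only, not DOD policy text; the `Item` fields and the way the requiring-activity determinations are represented as boolean flags are assumptions.

```python
# Hedged sketch of the IUID marking criteria for principal end items and
# contractor-marked secondary items, paraphrased from the report.
# The Item class and its field names are illustrative, not a DOD data model.
from dataclasses import dataclass

@dataclass
class Item:
    unit_acquisition_cost: float
    serially_managed: bool = False            # designated DOD serially managed
    mission_essential: bool = False
    controlled_inventory: bool = False
    permanent_id_required: bool = False       # requiring-activity determination
    embedded_serially_managed: bool = False   # is, or contains, an embedded
                                              # serially managed subassembly

def requires_iuid_marking(item: Item) -> bool:
    if item.unit_acquisition_cost >= 5000:        # criterion 1
        return True
    if item.serially_managed or item.mission_essential or item.controlled_inventory:
        return True                               # criterion 2
    if item.permanent_id_required:                # criterion 3
        return True
    return item.embedded_serially_managed         # criterion 4 (regardless of value)

print(requires_iuid_marking(Item(unit_acquisition_cost=7500)))  # True
print(requires_iuid_marking(Item(unit_acquisition_cost=1200)))  # False
```

Note that criterion 4 applies regardless of cost, which is why it is checked last as the fallback rather than being gated on the $5,000 threshold.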
In addition, each individual label contains information about the item, encoded in a machine-readable, two-dimensional image printed on the label. Known as a data matrix, this image contains various pieces of information encoded in a two-dimensional bar code. Figure 1 shows a data plate with an IUID data matrix in the lower right-hand corner. When combined, the pieces of information encoded in the data matrix make up a globally unique string of numbers referred to as the item’s UII number. To ensure that all items marked with IUID labels are globally unique, DOD requires that UII numbers be formatted according to international standards for syntax format. Further, UII numbers must be entered into DOD’s IUID Registry. The registry is a database intended to ensure that each UII number is unique. An item’s UII number may be entered into the registry in one of two ways: the item is marked with an IUID label and the UII number associated with that label is registered, or DOD or contractors can establish a “virtual” UII number. According to DOD guidance, these virtual UII numbers are assigned to an item that has not yet been marked with an IUID label. The guidance states that a virtual UII number may be used due to economic or logistical considerations. For example, a DOD component may virtually mark one item that is embedded in another item, so that DOD does not have to remove the embedded item solely to mark the embedded item. For legacy items already in a component’s inventory, marking and registration are the responsibility of the component. For items that are not yet in DOD’s inventory and are being delivered to a component—or for property furnished by the government to a contractor—it is the responsibility of the contractor to mark or register items. If the UII number in a data matrix is improperly formatted, it cannot be used to properly identify the item, and the data matrix must be replaced. 
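The relationship between a UII number and the registry can be sketched as follows. The concatenation shown (enterprise identifier plus part number plus serial number) mirrors one common UII construct, but the real syntax is governed by international standards and DOD guidance; the class, the sample values, and the in-memory "registry" are purely illustrative stand-ins.

```python
# Illustrative sketch of forming a UII number and enforcing registry
# uniqueness. The concatenation mirrors one common UII construct; actual
# UII syntax follows international standards. All values are made up.

class IUIDRegistry:
    """Toy stand-in for DOD's IUID Registry: stores UIIs, rejects duplicates."""
    def __init__(self):
        self._uiis = set()

    def register(self, uii):
        if uii in self._uiis:
            raise ValueError(f"UII already registered: {uii}")
        self._uiis.add(uii)
        return uii

def build_uii(enterprise_id, part_number, serial_number):
    # The concatenated string is globally unique as long as the enterprise
    # assigns each part/serial pair only once.
    return f"{enterprise_id}{part_number}{serial_number}"

registry = IUIDRegistry()
uii = build_uii("0CVA5", "PN1234", "SN0001")  # hypothetical values
registry.register(uii)
print(uii)  # 0CVA5PN1234SN0001
```

A "virtual" UII, as described above, would correspond to calling `register` for an item before any physical label exists.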
Second, personnel must electronically read the label’s data matrix, the two-dimensional bar code in which the item’s UII number is encoded. There are several types of tools that can be used for this process, including hand-held scanners and web-based software that can read an image of a data matrix. Because data matrices cannot be read visually, electronically reading the matrices is the only way to access the UII data they contain. Figure 2 shows an electronic scanner being used to read a data plate with an IUID data matrix. Third, UII data from a data matrix is passed to an IT system. According to DOD officials, there are a variety of IT systems that have a requirement to use UII data, and some of these systems currently have the capability to store UII data. Examples include the Army’s Property Book Unit Supply Enhanced, the Navy’s Configuration Data Managers Database–Open Architecture, the Marine Corps’ Joint Asset Maintenance Integrated Support System, and the Air Force’s Automated Inventory Management Tool. DOD officials have explained that some of these systems operate in “pockets” within the components, and do not share UII data across the components or DOD-wide. For instance, the Air Force’s Automated Inventory Management Tool contains UII data that is specific to an Air Force installation, and does not have the capability to share these data with other installations. DOD’s goal is for the components to share UII data across each of their individual IT systems, and DOD-wide, between components. Within DOD, this type of data sharing is characterized as “enterprisewide.” In order to accomplish enterprisewide data sharing of UII data, the components intend to use certain IT systems referred to as Enterprise Resource Planning systems. These automated systems consist of multiple, integrated functional modules that perform a variety of business-related tasks, such as general-ledger accounting, payroll, and supply chain management.
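Once a scanner decodes the data matrix, software must pick the individual fields out of the decoded string before the UII can be passed to an IT system. The simplified parser below assumes a bare payload of fields separated by the ASCII group separator and tagged with MH10.8.2-style data identifiers (17V for an enterprise/CAGE code, 1P for a part number, S for a serial number); real payloads use the full ISO/IEC 15434 message envelope, which this sketch omits.

```python
# Simplified sketch of extracting fields from a decoded IUID data matrix.
# Assumes GS-separated fields with MH10.8.2-style data identifiers; the
# full ISO/IEC 15434 envelope is omitted. Sample values are made up.

GS = "\x1d"  # ASCII group separator used between fields

DATA_IDENTIFIERS = ("17V", "1P", "S")  # longest identifiers checked first

def parse_payload(payload):
    fields = {}
    for field in payload.split(GS):
        for di in DATA_IDENTIFIERS:
            if field.startswith(di):
                fields[di] = field[len(di):]
                break
    return fields

decoded = GS.join(["17V0CVA5", "1PPN1234", "SSN0001"])  # hypothetical scan
print(parse_payload(decoded))  # {'17V': '0CVA5', '1P': 'PN1234', 'S': 'SN0001'}
```

A deficient data matrix of the kind discussed later in the report would surface here as a payload whose fields cannot be parsed or whose combined fields do not form a validly formatted UII.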
DOD officials have explained that certain Enterprise Resource Planning systems will provide the capability to share UII data enterprisewide, and in this report we are focusing on DOD efforts to integrate IUID with these systems. Once UII data is uploaded into an IT system, such as an Enterprise Resource Planning system, DOD intends to store UII data and share these data within and across DOD organizations. In addition, IT systems can use software to analyze these data to improve logistics processes, such as property accountability and maintenance, in DOD’s supply chain. Fourth, to achieve IUID technology’s potential benefits in many logistics processes, DOD personnel will have to periodically repeat the previous steps, including scanning the matrix on an item’s label; uploading the UII data into an IT system; and then storing, sharing, and analyzing the data as required by the specific logistics process. For example, in a property-accountability process that we observed, each time a weapon was checked in or out of an armory, personnel scanned the label’s data matrix; the matrix’s UII number was uploaded into an electronic property book; and software then matched the weapon’s UII number with data that identified the weapon’s owner. For many of the logistics processes in which IUID could be used, these steps would be repeated throughout an item’s life cycle. It is DOD’s goal for its components to share UII data departmentwide, and the components are to use these data for unique item tracking. The Office of the Deputy Under Secretary of Defense for Logistics and Materiel Readiness—under the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics—is the focal point for implementing IUID capabilities for DOD’s supply chain materiel. As of July 2011, that office delegated the responsibility for programmatic lead of DOD-wide IUID implementation to the Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration (ODASD(SCI)).
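The armory check-in/check-out flow described above — scan the data matrix, upload the UII, match it against the property book — can be sketched as a small lookup loop. The property-book structure, the UII strings, and the owner names below are all invented for illustration.

```python
# Toy sketch of the observed property-accountability flow: each scan looks
# up the weapon's UII in an electronic property book and matches it to the
# owner of record. All entries are invented.

property_book = {
    "UII-000123": "SGT Alpha",   # UII -> owner of record
    "UII-000124": "CPL Bravo",
}

custody_log = []  # running record of check-in/check-out events

def check_out(scanned_uii, requester):
    owner = property_book.get(scanned_uii)
    if owner is None:
        raise KeyError(f"UII not in property book: {scanned_uii}")
    custody_log.append(("out", scanned_uii, requester))
    return owner

def check_in(scanned_uii, returner):
    custody_log.append(("in", scanned_uii, returner))

owner = check_out("UII-000123", "SGT Alpha")
check_in("UII-000123", "SGT Alpha")
print(owner, len(custody_log))  # SGT Alpha 2
```

Repeating these lookups over an item’s life cycle is what produces the custody history that makes a UII more useful than an unscanned serial number.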
According to component officials, each component has multiple organizations carrying out IUID implementation tasks, such as creating policy and defining requirements; planning and budgeting for implementation of policy; and executing requirements. In addition, according to component officials, each component has an office that maintains the lead in IUID implementation policy. These respective offices are: Army—the Office of Life Cycle Logistics Policy, in the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology; Navy—the Office of the Assistant Secretary of the Navy for Research, Development, and Acquisition; Marine Corps—the Office of the Director, Logistics Plans, Policies, and Strategic Mobility, in the office of the Deputy Commandant of the Marine Corps for Installations and Logistics; Air Force—the Directorate of Transformation, Deputy Chief of Staff for Logistics, Installations and Mission Support; and Defense Logistics Agency—the Office of Logistics Management Standards. Also, to facilitate intercomponent communication and collaboration, the components have established a number of working groups and other bodies that coordinate IUID implementation. The Joint Logistics Board determined that there were ambiguities concerning DOD’s IUID policy, requirements, and proposed value across DOD, as well as wide variation in the components’ implementation strategies, execution, and funding of IUID implementation. As a result, in 2009 the board chartered the task force, led by the Assistant Deputy Under Secretary of Defense for Maintenance Policy and Programs, and including representatives from the components and the Office of the Secretary of Defense.
The task force had several goals, including assessing the value of IUID within DOD’s supply chain and recommending changes to policy and guidance to adequately align IUID implementation with the task force’s evaluation of IUID’s value. The task force issued a report with recommendations in June 2010 that estimated financial costs of IUID implementation, as well as financial and nonfinancial benefits. Specifically, the report stated that DOD could begin to achieve net financial benefits of IUID implementation in fiscal year 2017. In addition, the task force recommended modifying some of DOD’s IUID-marking criteria. Subsequently, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics modified DOD’s IUID marking criteria to implement some of the task force’s recommended changes, issuing the modification in December 2010. In addition, the task force issued a revision to its initial estimates, lowering its cost estimate. The revision was issued in March 2011. DOD’s key IUID policy issuances and implementation events from fiscal year 2003 to fiscal year 2011 are summarized in appendix II. DOD has taken some steps to improve its approach to managing and implementing IUID technology, but has yet to incorporate some key elements of best management practices into its evolving framework for management of IUID implementation. These include internal controls and analysis of return on investment. According to GAO’s previously published work, internal controls are important in helping agencies improve operational processes and implement new technological developments. Internal controls include an organizational structure that clearly defines key areas of authority; policies that enforce management directives; goals; and performance measures. In addition, GAO and DOD have identified best practices for analyzing a program’s return on investment.
The practices identified by GAO include providing estimates of all potential costs and the timing of costs. DOD has identified best practices that include analyzing benefits, and making recommendations, based on relevant evaluation criteria. DOD has defined key areas of authority and responsibility for IUID implementation, and is updating policy to incorporate changes required by the implementation of IUID. However, DOD has not incorporated other key elements of best management practices into its evolving framework for management of IUID implementation. For example, DOD lacks such key information as quantitatively defined goals for marking legacy items; performance measures, such as reliable schedules for predicting when its Enterprise Resource Planning systems will have the capability to manage items using UII data; and a full estimate of IUID’s cost and benefits. Without a management framework that includes quantitatively defined goals, performance measures, and a more complete estimate of all associated costs and benefits, DOD has faced challenges in implementing IUID technology and runs the risk of not fully realizing its potential benefits, including financial benefits by fiscal year 2017. ODASD(SCI) and the components have taken some steps that could improve DOD’s management approach for IUID implementation. According to the DASD(SCI), his office is in the process of developing a framework for managing and implementing IUID, which we reviewed. As of March 2012, this framework consisted of two elements. The first element is a set of July 2011 briefing slides titled “IUID Game Plan and Actions Underway.” The slides include a summary of actions that DOD needs to take in key areas of IUID implementation such as the marking of items, the use of IUID in business processes, and modifying IT systems to incorporate IUID. These slides also indicate that ODASD(SCI) is following some best practices of a comprehensive management framework. 
In the slides, DOD clearly defines the Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness as the DOD organization responsible for leading IUID implementation activities. As previously discussed, that office delegated the responsibility to ODASD(SCI). In addition, the slides discuss DOD policies that enforce management directives concerning IUID implementation and that are being updated to incorporate IUID. The second element of DOD’s framework for managing and implementing IUID is a January 2012 timeline listing several planned implementation actions from fiscal years 2012 through 2017. The components have also taken some steps toward improving their management of IUID implementation. The Army and Marine Corps are using some quantifiable goals—and certain IUID marking criteria—to track the progress of their legacy marking efforts. In addition, officials from some components told us that they had inspected some newly-acquired items to determine whether these items were sufficiently marked with IUID labels. For the items they reviewed, those inspections helped to detect problems in contractors’ marking of items. Further, the Marine Corps and Air Force are planning to integrate IUID elements into implementation schedules for their Enterprise Resource Planning systems. DOD has not fully incorporated internal controls, such as quantifiable goals or metrics to assess progress, into its framework for management of IUID implementation. Establishing internal controls is key to helping an agency meet its goals, and we have previously reported that in the absence of quantifiable targets, it is difficult for officials to assess whether goals were achieved, because comparisons cannot be made between projected performance and actual results.
DOD has identified tens of millions of legacy items that meet its IUID marking criteria, but has not developed a full set of quantifiable goals or metrics to assess its progress in marking these items. The task force has stated that DOD will not achieve IUID’s potential benefits unless DOD marks a “significant” number of these legacy items, and an ODASD(SCI) official stated that DOD needs to mark a “majority” of these legacy items by fiscal year 2015. However, the task force and ODASD(SCI) have not quantitatively defined the terms “significant” or “majority,” respectively. Further, according to the task force report, the number of legacy items DOD will mark is an important factor in determining when DOD may begin to realize IUID’s projected financial benefits. Therefore, without metrics to quantify its progress in marking legacy items, it is unclear whether DOD will begin to realize these benefits by fiscal year 2017, the year in which the task force report projects these benefits may begin. In addition, the components do not have adequate schedules for the integration of IUID with their Enterprise Resource Planning systems. We have previously reported that an important element of measuring performance is the collection of data that are complete and consistent enough to document performance and support decision making. As previously discussed, DOD has established IUID marking criteria for different categories of inventory. We found that some components used these criteria to track their progress in marking legacy items; however, others did not. The DASD(SCI) stated that he has asked the components to periodically report on their progress in marking legacy items to his office. However, he stated that he has not asked the components to use DOD’s IUID marking criteria in their reporting. We have previously reported that another key element of best management practices is analysis of return on investment.
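A quantified goal of the kind the report finds missing could be as simple as a marked-item percentage tracked against a numeric target, in place of terms like “significant” or “majority.” The sketch below illustrates this; every number in it is invented for illustration.

```python
# Illustrative progress metric: percent of legacy items marked, compared
# against a quantified target. All figures are invented.

def marking_progress(items_marked, items_requiring_marking):
    """Percent of the legacy population that has been marked."""
    return 100.0 * items_marked / items_requiring_marking

def meets_goal(items_marked, items_requiring_marking, target_percent):
    """Compare actual progress against a quantitatively defined goal."""
    return marking_progress(items_marked, items_requiring_marking) >= target_percent

progress = marking_progress(45_000_000, 90_000_000)
print(f"{progress:.1f}% marked")                  # 50.0% marked
print(meets_goal(45_000_000, 90_000_000, 75.0))   # False
```

With a defined target percentage, the comparison between projected and actual performance that the report calls for becomes a mechanical check rather than a judgment about vague terms.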
A complete analysis of the return on investment consists of several best practices identified in previously published GAO and DOD work. The practices identified by GAO include providing estimates of all potential costs and the timing of costs. DOD has identified analyzing benefits, and making recommendations on the basis of relevant evaluation criteria, as best practices. In addition, GAO has reported that performing a sensitivity analysis of these estimates demonstrates the effects on the cost and schedule of an assumption that may no longer be valid. DOD began implementation of IUID in fiscal year 2003, and in fiscal year 2009 the task force estimated some costs and benefits of implementation. The task force report discusses how IUID could improve DOD’s logistical efficiency and effectiveness. In addition, the report provides a “rough order of magnitude” assessment of certain costs and financial benefits of implementation and projects when DOD may begin to realize financial benefits. However, the components, ODASD(SCI), and the task force did not fully follow certain best practices for the estimation of costs and financial benefits. As a result, DOD does not have a full analysis of IUID’s potential return on investment. DOD’s components could not provide complete historical and planned spending data for IUID implementation, and ODASD(SCI) has not tracked the components’ spending or budget requests for IUID. According to GAO best practices for cost estimation, historical data on the cost of a system are important for projecting a credible estimate of future costs. Although the Marine Corps and Air Force provided us with complete estimates of the amount of money they have spent in their IUID budgets, the Army and Navy provided incomplete estimates of their IUID spending, and the Defense Logistics Agency did not provide an estimate.
These components were not able to provide complete estimates because they do not track IUID spending as a distinct budget category. In addition, ODASD(SCI) does not track the five components’ historical spending on IUID. Although the components were not able to provide complete historical spending information, according to the information they provided, the components spent at least $219 million on IUID implementation from fiscal year 2004 through fiscal year 2011. Officials explained that they spent this money on a variety of IUID implementation efforts, including the acquisition of marking equipment, such as IUID label printers and scanners; the marking of legacy items; and the development of software to support marking processes. Table 1 summarizes the five components’ reported historical spending on IUID over 8 fiscal years. The components’ total reported spending in fiscal year 2011 dollars is $226.1 million. The Army provided an estimate of the money spent on IUID implementation in its depots over this period but, as an official explained, could not provide an estimate of IUID spending outside of its depots because it does not track IUID funding as a distinct budget category. The Navy was also able to report on a portion of its IUID spending that was executed by one office within the Navy, but Navy officials stated that because the Navy has not funded IUID implementation in a centralized fashion, the Navy cannot track how much other Navy offices have spent in implementing IUID. An official from the Defense Logistics Agency explained that while the agency does spend money on IUID implementation, it does not have a distinct budget for IUID implementation, and so cannot specifically track its IUID costs. With regard to future spending, the Marine Corps and the Air Force reported that they requested a total of $19.2 million (Marine Corps: $10.8 million; Air Force: $8.4 million) for IUID implementation in fiscal year 2012.
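The gap between the $219 million in nominal reported spending and the $226.1 million stated in fiscal year 2011 dollars reflects a standard constant-dollar conversion. The sketch below shows the mechanics of such a conversion; the deflator values and the year-by-year spending breakdown are invented for illustration (they are chosen only so the nominal amounts sum to $219 million), and an actual conversion would use published DOD deflators and the components’ real figures.

```python
# Sketch of converting nominal spending by fiscal year into constant
# FY2011 dollars. Deflators and yearly amounts below are hypothetical;
# the nominal amounts are chosen only so they sum to $219 million.

deflators_to_fy2011 = {
    2004: 1.18, 2005: 1.15, 2006: 1.11, 2007: 1.08,
    2008: 1.05, 2009: 1.04, 2010: 1.02, 2011: 1.00,
}

nominal_spend = {  # $ millions per fiscal year (invented)
    2004: 10.0, 2005: 15.0, 2006: 20.0, 2007: 25.0,
    2008: 30.0, 2009: 35.0, 2010: 40.0, 2011: 44.0,
}

total_nominal = sum(nominal_spend.values())
total_fy2011 = sum(amount * deflators_to_fy2011[fy]
                   for fy, amount in nominal_spend.items())
print(f"nominal: ${total_nominal:.0f}M; FY2011 dollars: ${total_fy2011:.1f}M")
```

The point of the conversion is that dollars spent in different years are not directly comparable; expressing every year in a common base year makes the eight-year total meaningful.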
However, the Army and Navy were unable to provide their budget requests for fiscal year 2012 because they do not budget for IUID spending as a distinct budget category. According to officials from the Defense Logistics Agency, an office within the agency submitted a request for IUID implementation spending, but this request was not included in the agency’s final fiscal year 2012 budget request. In addition to the components not fully tracking their spending on IUID implementation, ODASD(SCI) does not track the components’ spending on IUID implementation. ODASD(SCI) explained that the office uses a set of “scorecards” to track the components’ IUID implementation efforts. In 2011, ODASD(SCI) received two sets of scorecards, the first in January and the second in November. In our review of these scorecards, we found that the components do not report information on either the amount of money they have spent on IUID implementation, or the amount of money that they plan to request for IUID implementation. In its report, the task force estimates that DOD would need to invest $3.2 billion to realize the benefits of IUID implementation. This estimate includes several types of costs, such as the cost of marking legacy items and the labor required to perform various implementation tasks. However, the estimate does not include the following four costs: The task force could not completely estimate the costs associated with how individual logistics processes would need to change to incorporate the use of IUID technology, because it did not have sufficient information about these changes. As a relatively new technology, IUID is not widely used in existing DOD logistics processes. That means that the components will have to modify existing processes to use IUID. DOD has made some progress in defining the type of logistics processes that would need to change to incorporate IUID. 
For example, according to ODASD(SCI), DOD intends to modify 10 different categories of logistics processes, such as receipt and distribution of items. DOD has made some progress in planning for such modifications. For example, the Army Materiel Command has determined that 11 of its logistics processes will incorporate IUID functionality through one of the Army’s Enterprise Resource Planning systems. However, according to Army officials, the Army has not yet defined the distinct steps within its logistics processes at which personnel will use IUID. Further, according to DOD officials, the components have not determined the full number or type of business processes within these categories that need to be modified. In addition, in their 2011 scorecard updates, the Army, Marine Corps, and Air Force reported challenges or concerns with progress in defining how their business processes will require modification because of IUID. Without information on the number of logistics processes that will require modification and the specific steps that will need to change to accommodate the incorporation of IUID, the task force could not have completely estimated costs. Several DOD officials we spoke with agreed with this assessment, explaining that without this information, the task force was not able to completely estimate IUID implementation costs. The task force report states that the task force did not include the cost of modifying Enterprise Resource Planning systems to share and use UII data because the functionality to use UII data is inherent in these systems. However, if DOD is to achieve benefits from IUID, Enterprise Resource Planning systems must have the ability to share and analyze UII data, and according to DOD officials the ability to perform these two functions is not always inherent in these systems.
For example, the core software package for one of the Army’s Enterprise Resource Planning systems has the capability to accept UII data, but in its current version this capability is not activated. For the system to accept UII data, the capability must be activated in a future update. According to DOD officials, such modifications to Enterprise Resource Planning systems can incur costs. Navy officials explained that the task force report does not include the additional cost that would be required to modify the Navy’s systems to communicate with other components’ systems, or to analyze UII data to improve logistics processes. By not including certain costs to modify the relevant Enterprise Resource Planning systems, the task force may have left out a substantial set of costs. The report did not include the full cost of marking newly-acquired items with IUID. Because the Defense Federal Acquisition Regulation Supplement contract clauses provide that contractors are responsible for marking or registering items, contractors are likely to build the cost of marking items into their contract pricing. Therefore, while contractor personnel are marking the items in question, this cost is borne by DOD to the extent that the contractor has built the cost of marking qualifying items into the costs of the goods or services provided. According to component officials, the components do not know how much of a newly-acquired item’s cost is attributable to contractors’ marking of that item. However, according to estimates in the task force report and provided by the Air Force, contractors’ average cost to mark a newly-acquired item is about $30 to $50. On the basis of GAO analysis of information from the DOD registry on the number and rate of contractors’ registration of newly-acquired items in January 2012, IUID marking by contractors could result in approximately $27 million to $45 million in marking costs to the components per year. 
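The contractor marking-cost arithmetic above is straightforward to reproduce. The per-item cost range of $30 to $50 comes from the task force report and Air Force estimates; the annual registration volume of 900,000 items is our assumed figure, chosen because it reproduces the $27 million to $45 million range derived from the January 2012 registry data.

```python
# Worked version of the contractor marking-cost estimate. The per-item
# range is from the task force report and Air Force estimates; the annual
# volume of 900,000 registrations is an assumed figure that reproduces
# the $27M-$45M range in the text.

COST_PER_ITEM_LOW, COST_PER_ITEM_HIGH = 30, 50  # dollars per item
ANNUAL_REGISTRATIONS = 900_000                  # assumed items per year

low = COST_PER_ITEM_LOW * ANNUAL_REGISTRATIONS
high = COST_PER_ITEM_HIGH * ANNUAL_REGISTRATIONS
print(f"${low / 1e6:.0f} million to ${high / 1e6:.0f} million per year")
# $27 million to $45 million per year
```

Because contractors fold this cost into contract pricing, it is paid by DOD even though it never appears as a distinct IUID line item, which is why excluding it understates the task force's $3.2 billion investment estimate.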
This cost is not included in the task force’s estimate. As previously discussed, an IUID label on an item may contain human-readable information, such as the item’s National Stock Number. In addition, the label has a data matrix that contains information about the labeled item. For example, the matrix contains various pieces of information that make up the globally unique string of numbers referred to as the item’s UII number. Contractors have delivered items to DOD that have labels with deficient data matrices. If a label has a deficient data matrix, DOD cannot use the label’s data matrix to track or manage items. The task force did not include the cost to the components of fixing these deficient IUID data matrices on contractor-marked items. Although component officials were not able to estimate the financial cost of fixing these data matrices, officials explained that there is productivity loss associated with fixing them. For example, according to Army and Marine Corps officials, their verification of a deficient matrix’s missing or incorrect data requires an average of 10 to 15 minutes, and sometimes requires significantly more time, when additional research into the item is needed. This investment of time, multiplied by many thousands of deficient data matrices, may result in a substantial amount of lost productivity for DOD components’ personnel. The task force report assessed the potential benefits of IUID implementation by examining three categories of logistics processes: intensive item management, property accountability, and product life cycle management. According to the task force’s analysis, DOD’s implementation of IUID is unlikely to result in substantial financial benefit in the categories of intensive item management or property accountability. However, the report did discuss potential nonfinancial benefits of IUID implementation in these two categories. 
The report states that the use of IUID in intensive item management could enable strict accountability and control of DOD’s most critical assets—such as nuclear weapon–related material—across parts of DOD’s supply chain, enhancing the security and safety of such assets. Moreover, according to the report, implementing IUID into property-accountability processes on the enterprise level could enable DOD to track equipment assets throughout their life cycle. The report explains that one benefit of tracking on the enterprise level is that DOD may be able to more quickly address equipment losses. Further, according to an official from the Office of Defense Procurement and Acquisition Policy, the use of IUID in DOD’s logistics processes could lead to improved data quality that may result from automatically entering data into IT systems, as opposed to manually entering data. While the report does not estimate substantial savings through the integration of IUID with intensive item management or property- accountability processes, the report estimates that IUID implementation could result in annual savings of $3 billion to $5 billion through the implementation of IUID in a collection of maintenance processes referred to as product life cycle management. According to our review of the report and a task force official, the key to achieving savings through product life cycle management is to track and manage individual items by a unique identifier. That approach is called serialized item management. According to DOD officials and the task force report, DOD can achieve serialized item management by using any type of unique identifier, including a traditional serial number; a UII number; or a unique identifier provided by a different type of technology, such as a radio frequency identification device or a contact memory button. 
The task force estimated achieving substantial financial benefits from product life cycle management, but it used a methodology for estimating these benefits that may not be appropriate to the scale and complexity of DOD’s IUID implementation efforts. The IUID task force report states that projected savings will gradually increase as implementation of IUID spreads throughout DOD and, by fiscal years 2016 to 2017, DOD may reach a break-even point at which its annual financial savings would equal its annual spending for implementation of IUID. After fiscal year 2017, the report projects that DOD may pass the break-even point, and could begin to realize the annual savings of $3 billion to $5 billion. To develop its estimate of cost savings through the use of serialized item management in product life cycle management, the task force used the following methodology. First, the task force reviewed case studies of five DOD maintenance programs that use serialized item management. The task force observed that by using serialized item management, the maintenance programs reduced costs by an average of 4 to 6 percent in labor and materiel costs for maintenance, and in the cost to transport items to maintenance locations. Next, the task force estimated maintenance costs by adding DOD’s fiscal year 2008 budget for depot- and field-level maintenance to DOD’s fiscal year 2009 budget for maintenance transportation, which together total about $83.2 billion. Finally, by applying the 4 to 6 percent reduction to $83.2 billion in annual maintenance costs, the task force estimated that an annual savings of $3 billion to $5 billion could result from the use of serialized item management in product life cycle management maintenance processes. However, three aspects of this methodology call into question whether it is reasonable to assume that DOD-wide use of IUID technology in maintenance processes would lead to the savings estimated by the task force.
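The task force's savings arithmetic can be reproduced directly from the figures in its report: a 4 to 6 percent reduction applied to about $83.2 billion in annual maintenance and maintenance-transportation costs.

```python
# The task force's savings arithmetic, using the figures stated in its
# report: a 4-6 percent reduction observed in five case studies, applied
# to roughly $83.2 billion in annual maintenance-related costs.

ANNUAL_MAINTENANCE_COSTS = 83.2e9           # dollars, per the task force
REDUCTION_LOW, REDUCTION_HIGH = 0.04, 0.06  # case-study cost reductions

savings_low = ANNUAL_MAINTENANCE_COSTS * REDUCTION_LOW
savings_high = ANNUAL_MAINTENANCE_COSTS * REDUCTION_HIGH
print(f"${savings_low / 1e9:.1f}B to ${savings_high / 1e9:.1f}B per year")
# $3.3B to $5.0B per year
```

Seeing the calculation laid out makes the methodological concern concrete: the entire $3 billion to $5 billion range rests on extrapolating a percentage observed in five programs to the whole DOD-wide maintenance budget.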
The task force estimated DOD-wide savings on the basis of a limited number of case studies, and conclusions developed from these studies may not be applicable to the substantial complexity and size of the DOD-wide maintenance enterprise. According to the Office of the Deputy Assistant Secretary of Defense for Maintenance Policy and Programs, DOD’s maintenance operations support a wide range of weapon systems, including about 280 ships, 14,000 aircraft and helicopters, 900 strategic missiles, and 30,000 combat vehicles. Our review of the task force report indicates that the five case studies address—in total—five individual weapon systems, whereas DOD performs maintenance on hundreds of different systems. Moreover, four of the five case studies address either Air Force or Navy programs; only one addresses Army programs. Because of the limited scope of the case studies used by the task force, conclusions based on these case studies may not apply to the DOD-wide maintenance budgets that the task force used in its estimation of savings. The case studies did not address programs that use IUID as the technology that provides a unique identifier to track items through serialized item management. Rather, the case studies addressed programs that use other means of uniquely tracking items, such as contact memory buttons. Thus, the case studies do not consider costs that may be specific to IUID technology, such as the cost to purchase scanners or software to read data matrices, or the cost to replace deficient IUID data matrices. Because of this, it may be inaccurate to assume that maintenance programs using IUID technology will achieve the same type or amount of savings claimed by the case studies of programs using other technologies. Even when a logistics program experiences cost savings after introducing a new technology or process, it can be difficult to link the savings directly to a specific cause or technology.
For example, we visited an installation that is using a combination of IUID, passive radio frequency identification, new database software, and a reorganization of warehouse space to reduce the cost of managing its supply chain. However, an installation official explained that it was not possible to determine the extent to which the cost savings were attributable to a specific change, such as the introduction of IUID. For this reason, it may have been incorrect for the task force to assume a link between estimated cost savings and the use of a specific technology such as IUID. Third, we have previously reported that every estimate is uncertain because of the assumptions that must be made about future projections, and because of this, cost estimators should always perform a sensitivity analysis that demonstrates the effects on the cost and schedule of an assumption that may no longer be valid. The task force report estimates that DOD could begin to realize financial savings from IUID implementation after fiscal year 2017, and explains that its cost and benefit estimates are conditional, depending on a number of assumptions. However, the task force report does not contain a sensitivity analysis for either its cost or its benefit estimates. As a result, the report does not portray the potential effects of changing key assumptions on its estimates of cost, financial benefits, or the time frames in which it estimates DOD may realize financial benefits. There is a substantial amount of uncertainty associated with the key assumptions on which the task force report’s estimates are based. For example, the task force report states that the cost to mark legacy items is one of the primary drivers of IUID implementation costs. However, as discussed in more detail later, DOD may face challenges in determining the total number of legacy items it must mark. 
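A minimal sketch of the kind of sensitivity analysis the report lacks might look like the following. The 60 million to 122 million legacy-item range comes from DOD estimates cited in this report, but the per-item marking cost is a hypothetical placeholder, not a DOD figure:

```python
# Hypothetical sensitivity analysis over one key assumption: the number of
# legacy items to be marked. The per-item cost is assumed for illustration.
def total_marking_cost(item_count, cost_per_item):
    """Cost of marking a population of legacy items, in dollars."""
    return item_count * cost_per_item

ASSUMED_COST_PER_ITEM = 25.0  # dollars per item; placeholder, not from the report

for item_count in (60e6, 122e6):  # low and high estimates of legacy items
    cost = total_marking_cost(item_count, ASSUMED_COST_PER_ITEM)
    print(f"{item_count / 1e6:.0f}M items -> ${cost / 1e9:.2f}B marking cost")
# Doubling the assumed item population roughly doubles this cost driver,
# which would push the estimated break-even year beyond fiscal year 2017.
```

Whatever the true per-item cost, presenting estimates across the assumption's plausible range is what would let decision makers see how sensitive the break-even date is to the size of the legacy inventory.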
The task force’s cost estimate does not reflect the range of costs associated with marking a population of legacy items that—according to DOD estimates—may range between about 60 million and 122 million items. In addition, the task force’s estimate of financial benefits assumes that the components’ IT systems, including their Enterprise Resource Planning systems, will have the capability to use UII data for product life cycle management in 2015. However, as discussed later in the report, the components cannot reliably predict when their Enterprise Resource Planning systems will be able to use UII data for product life cycle management. The task force’s estimate of when DOD will begin to realize these benefits does not reflect the possibility that the components’ Enterprise Resource Planning systems will not have this capability in fiscal year 2015. Also, the task force’s estimates of financial benefits assume IUID implementation across each of DOD’s components. However, as discussed in more detail later, the Navy, Air Force, and Defense Logistics Agency are currently not carrying out key IUID implementation efforts. For example, the Navy is not systematically marking legacy items and the potential integration of IUID with its Enterprise Resource Planning system for Supply is unfunded. In addition, the Air Force is not actively integrating IUID into its Enterprise Resource Planning system. Also, the Defense Logistics Agency is not marking legacy items. The task force’s estimate of financial benefits does not consider that some benefits may not be achieved as a result of DOD’s partial implementation of IUID. Without a sensitivity analysis of its cost and benefit estimates, the task force report does not provide DOD leaders with information about how well the estimates may hold up under reasonable changes to the assumptions on which the estimates are based. 
DOD components and contractors have been marking items with IUID, but due to several challenges, it is difficult for DOD to assess its progress in marking items or to ensure that contractors are sufficiently marking items. DOD components have reported marking more than 2 million legacy items, and DOD has identified tens of millions of legacy items that meet its IUID marking criteria. But DOD does not have complete information on the total number of legacy items that its components have marked and must mark in the future. Moreover, DOD has not developed a full set of quantifiable goals to assess its progress in marking these items. Further, DOD has not set interim milestones to determine the components’ progress in marking items, and DOD’s components do not use consistent criteria to track progress in legacy item marking. With respect to newly-acquired items and pieces of government-furnished property, DOD reports that as of January 2012, more than 2,500 contractors had delivered newly-acquired items to DOD and had registered over 11.5 million such items and pieces of government-furnished property in DOD’s IUID Registry. However, DOD cannot ensure that contractors are sufficiently marking all of the items that require IUID labels, for two reasons: DOD reporting requirements do not provide assurance that appropriate marking clauses are included in all contracts, and DOD components do not have systematic processes to assess the sufficiency of IUID data matrices. As a result, DOD may be unable to ensure that contractors are marking all newly-acquired items and pieces of government-furnished property that require IUID labels, and DOD cannot know the extent to which contractors are supplying IUID data matrices that the components need to track items with IUID technology. DOD has made some progress in marking legacy items. 
In late 2004, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics established the requirement for DOD components to mark legacy items with IUID labels and for the military services to develop plans to accomplish such marking. An Army depot began marking legacy items in 2005, and according to information from DOD’s IUID Registry, DOD components began registering legacy items in 2006. As of October 2011, the components reported marking more than 2 million legacy items. However, DOD does not have complete information on the total number of items that must be marked and that have been marked; DOD has not quantifiably defined its marking goals according to DOD’s IUID marking criteria; DOD has not set interim milestones to determine the components’ progress in marking items; and DOD’s components do not use consistent criteria to track progress in legacy item marking. The components provided us with estimates of the total number of items that must be marked in the future, according to DOD’s IUID marking criteria. According to the components, the total number of items in their inventories that meet DOD’s IUID marking criteria is about 122 million. In addition, the components provided us with estimates of the total number of legacy items they have marked, where such estimates were available. As of October 2011, the Army, Marine Corps, and Air Force reported that they had marked a total of about 2.7 million legacy items. The Navy and Defense Logistics Agency did not report the marking of legacy items. The Army reported that its inventory contains about 15 million items that meet DOD’s IUID marking criteria, and that it had marked about 1.2 million legacy items. The Marine Corps reported that its inventory contains about 3.1 million items that meet DOD’s IUID marking criteria, and that it had marked about 0.3 million legacy items. 
The Air Force reported that its inventory contains about 13.3 million items that meet DOD’s IUID marking criteria, and that it had marked about 1.2 million legacy items. The Navy reported that its inventory contains about 60.6 million items that meet DOD’s IUID marking criteria. However, the Navy could not provide an estimate of the number of legacy items it had marked. According to a Navy official, there are “pockets of compliance” within the Navy, in which certain organizations had marked legacy items with IUID labels. But the Navy does not have a Navy-wide, systematic plan or approach to legacy marking; it does not track the number of legacy items being marked within these pockets; and it characterized its progress in legacy marking as “minimal.” The Defense Logistics Agency reported that its inventory contains about 30.0 million items that meet DOD’s IUID marking criteria. However, according to the agency, it is not currently marking legacy items; does not have the capability or required technical information to mark the legacy items in its inventory; and does not plan to mark legacy items in the future. For example, the agency does not have marking equipment. In addition, agency officials explained that the agency lacks information on how to appropriately mark legacy items with IUID labels. According to the officials, the other components must provide this information to the agency, because the components manage the items that the agency stores in its inventory. With regard to the total number of items to be marked in the future, some component officials stated that their estimates are incomplete. For example, Army officials explained that their estimate does not include certain classified items because the system they used to estimate the Army’s legacy item inventory does not interface with systems that track those classified items. 
Navy officials stated that their estimate does not include items that are embedded in other items—such as a circuit board inside of an aircraft—because of system limitations and the time it would have taken to include these items in the Navy’s estimate. Further, Defense Logistics Agency officials explained that their estimate does not include items that are classified by the components as serially managed. Moreover, the components’ estimates of legacy items to be marked in the future do not match the estimate of the total number of legacy items to be marked according to the task force’s report. According to the report, DOD has a total of about 60 million legacy items to be marked. However, the components report that they must mark a total of about 122 million legacy items, and the Navy alone estimates it has 60.6 million legacy items to mark in the future. Because the task force’s estimate amounts to only about 49 percent of the components’ total, DOD may not have complete information on the total number of legacy items in its inventory that meet IUID marking criteria, and that it must mark in the future. As stated above, the components reported to us that they had marked about 2.7 million legacy items. However, information from the DOD IUID Registry indicates that about 4.9 million legacy items were registered as of October 2011. As previously discussed, an item’s UII may be entered into the registry in one of two ways. First, the item can be marked with an IUID label, and the UII associated with that label is registered. Second, DOD or contractors can establish a virtual UII, registering an item before it is eventually marked with a label. Because the components’ reported total is about 45 percent smaller than the number of legacy items recorded in the registry, it is unclear how many legacy items DOD has actually marked. 
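The size of the two discrepancies described above can be computed directly from the figures cited in this report:

```python
# Compare the competing estimates cited in the report (all figures from GAO).
task_force_total = 60e6    # legacy items to mark, per the task force report
components_total = 122e6   # legacy items to mark, per the components

ratio = task_force_total / components_total
print(f"Task force estimate is {ratio:.0%} of the components' total")
# -> Task force estimate is 49% of the components' total

components_marked = 2.7e6  # legacy items marked, per the components
registry_count = 4.9e6     # legacy items recorded in the IUID Registry

shortfall = (registry_count - components_marked) / registry_count
print(f"Components' count falls {shortfall:.0%} short of the registry figure")
# -> Components' count falls 45% short of the registry figure
```

Either discrepancy alone would cast doubt on DOD's visibility into its legacy inventory; together they leave both the denominator (items to mark) and the numerator (items marked) uncertain.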
Table 2 summarizes the components’ estimates of the total number of legacy items in their inventories that meet DOD’s IUID marking criteria; the task force’s estimate of the total number of legacy items in the components’ inventories that meet DOD’s IUID marking criteria; the components’ estimates of the total number of legacy items they have marked; and the total number of legacy items recorded in the DOD IUID Registry. An agency’s establishment of goals is a key internal control, and we have previously reported that in the absence of quantifiable targets, it is difficult for officials to assess whether goals were achieved, because comparisons cannot be made between projected performance and actual results. In 2010, the task force recommended that DOD focus its legacy marking efforts on those items from which DOD could derive the greatest benefit. According to an ODASD(SCI) official, it is DOD’s goal to complete the marking of the “majority” of these legacy items by the end of fiscal year 2015. In addition, according to the task force’s report, DOD’s marking of a “significant” number of legacy items is one of the keys to realizing the potential financial benefits of IUID implementation. We have previously reported that, where appropriate, to more easily assess agency progress, performance goals should have quantifiable, numerical targets. However, DOD has not quantified its goals of marking a “majority” of legacy items or a “significant” number of legacy items, respectively. Further, while some DOD components have quantifiable goals for marking certain legacy items, others do not. As previously discussed, DOD has established IUID marking criteria for different categories of inventory, such as certain items with a unit acquisition cost of $5,000 or greater; certain items that are serially managed; certain items that are controlled (sensitive or classified); and certain depot-level reparable items. We found that some components had set quantifiable goals using these criteria. 
For instance, the Marine Corps has quantifiable goals for each of the categories defined by DOD’s IUID marking criteria. The Army has quantifiable goals for each of these categories except for items that are controlled (sensitive or classified). Neither the Navy nor the Air Force has established quantifiable goals defined by DOD’s IUID marking criteria. We have previously reported that, once quantifiable goals have been established, metrics for a program’s main efforts—such as interim milestones and schedules—give decision makers the information needed to assess progress and estimate realistic completion dates. As previously discussed, we have noted that quantifiable targets—such as interim milestones—can assist organizations in tracking progress toward their goals. For example, quantifiable interim milestones could assist DOD in evaluating whether the components are marking legacy items at a rate that will allow DOD to meet its fiscal year 2015 goal of marking a “majority” or “significant” number of legacy items in its inventory. As of January 2012, ODASD(SCI), the task force, and the components had not fully developed quantifiable interim milestones to track progress toward DOD’s goals for marking legacy items. Neither ODASD(SCI) nor the task force had set interim milestones for the number of items that should be marked in fiscal year 2015, DOD’s target date for marking a “majority” of legacy items and a “significant” number of legacy items. Further, while some components have set interim milestones for tracking their progress in marking certain legacy items, others have not. For instance, the Air Force has established interim milestones for the marking of some legacy items, such as class VII equipment, but it is still working on developing interim milestones for others, such as class II items. The Marine Corps has established a goal of completing the marking of about 34 percent of its legacy items by December 2012. 
However, it has established no interim milestones toward that goal. Similarly, neither the Navy nor the Defense Logistics Agency has developed interim milestones. In January 2012, ODASD(SCI) provided us with an IUID timeline containing targets that are not quantified. The IUID timeline indicates that ODASD(SCI) expects the components to be marking legacy items at least through fiscal year 2017. In addition, ODASD(SCI)’s IUID timeline lays out fiscal year 2012 targets for the components to determine the number of items they need to mark, and fiscal year 2013 targets for the components to report on their legacy marking progress to ODASD(SCI). According to ODASD(SCI) officials, the office intends for the components to use the Federal Logistics Information System to eventually provide quantifiable reports on their legacy marking efforts. The DASD(SCI) stated that he has asked the components to develop quantifiable interim milestones for legacy marking. The components’ use of the system may provide them with a means to report progress in regard to these milestones. However, ODASD(SCI) officials stated that the system does not yet have this capability and they do not yet have an estimate for when the components could begin their reporting. Until ODASD(SCI) and the components begin to use such milestones to assess DOD’s progress in marking legacy items, it is difficult to know whether DOD’s current number of marked legacy items represents what DOD intended to achieve, almost 7 years after DOD established the requirement to mark legacy items. (See GAO/GGD-96-118.) The components’ establishment of quantifiable goals and interim milestones, and their tracking of progress against the categories defined by DOD’s IUID marking criteria, are summarized in table 3. While legacy items are marked by DOD, newly-acquired items and government-furnished property must be marked by DOD’s contractors. DOD has made some progress in ensuring that these types of items are marked by contractors. 
According to the DOD IUID Registry, more than 2,500 contractors had registered over 11.5 million newly-acquired items in the registry as of January 2012. DOD plans to eventually finish marking its legacy items, but contractors will continue to mark items that are acquired by DOD, or provided by DOD to contractors, and meet its IUID marking criteria. According to January 2012 data from the DOD IUID Registry, the number of newly-acquired items and government-furnished property already exceeds the number of legacy items. If current marking trends continue, the ratio of these items to legacy items will continue to increase, and newly-acquired items and government-furnished property will continue to make up the majority of DOD’s inventory of IUID-labeled items. For this reason, the future success of DOD’s IUID implementation efforts depends on having contractors sufficiently mark newly-acquired items and government-furnished property with IUID labels. While DOD has made some progress, it cannot currently ensure that contractors are sufficiently marking all of the items that require IUID labels, for two reasons: reporting requirements do not provide assurance that appropriate IUID-marking contract clauses are included, and DOD’s inspection efforts are not systematic. Without adequate reporting requirements regarding the components’ insertion of IUID clauses into applicable contracts, DOD cannot know the extent to which it is requiring contractors to mark all items that should have IUID labels. And, without sufficient inspection of IUID data matrices, DOD cannot know the extent to which contractors are supplying deficient data matrices. In order for DOD to use IUID technology to track contractor-marked items that qualify for IUID marking, DOD and its contractors must take certain steps to help ensure that qualified items are marked, and that the IUID marks are usable. 
These steps are as follows. First, in contracts for qualified items, the components must ensure that the contracts contain appropriate contract clauses and that those clauses are correctly completed; these clauses (such as the clause at Defense Federal Acquisition Regulation Supplement section 252.211-7003) require the contractor to mark or register qualified items. Second, the contractor must mark items with an IUID label whose data matrix can be read electronically, or, in some cases, establish a virtual UII. Third, the data matrix must contain the necessary information, organized with the proper data elements and syntax, and must be registered in DOD’s IUID Registry. Inspection can then determine whether an item carries an IUID label and whether that label’s data matrix is formatted according to DOD-wide standards. Officials from several components told us that they had inspected some newly-acquired items to determine whether these items were sufficiently marked with IUID labels. For the items they reviewed, those inspections helped to detect problems in contractors’ marking of items. Also, since 2009, the Defense Contract Management Agency has had a surveillance program in place to inspect newly-acquired items and assess contractors’ compliance with the acquired-items clause. A DOD memorandum requires that the components report to the Office of Defense Procurement and Acquisition Policy on only a portion of contracts that should include the clauses related to IUID. According to the memorandum, components are required to report on, among other things, whether the acquired-items clause is present in contracts for newly-acquired items. However, they are not required to report on whether the government-furnished property clause is present in contracts involving government-furnished property. Several components are thus reporting only on whether they are including the acquired-items clause in contracts. 
While the Air Force reports on whether it is including both the acquired-items clause and the government-furnished property clause in contracts, the Army, Navy, Marine Corps, and Defense Logistics Agency report only on whether they are including the acquired-items clause. While certain components are reviewing contracts for items that cost more than $5,000 or are serially managed, none of the components are reviewing contracts for items that meet other DOD IUID marking criteria, such as those for controlled (sensitive or classified) items. Consequently, DOD does not know the full extent to which the components are complying with the requirements to include IUID-related clauses in contracts. Without this information, DOD may be unable to ensure that contractors are marking all newly-acquired items and pieces of government-furnished property that require IUID labels. According to DOD officials, it is a good practice for either a DOD component or the Defense Contract Management Agency to inspect items to ensure that contractors have marked items with IUID labels, and that the labels’ data matrices are not deficient. Inspection of newly-acquired items is important because contractors have been delivering IUID labels with deficient data matrices that cannot be used by DOD. However, neither the components nor the Defense Contract Management Agency have been systematically inspecting the data matrices in IUID labels applied to items by contractors. According to both the Marine Corps and the Air Force, more than 10 percent of the newly-acquired items’ data matrices they examined after receipt of the items from the contractor were deficient. For example, one Marine Corps installation reported that from January 2010 through October 2011, there were 8 months in which more than 10 percent of newly-acquired items provided by contractors were marked with deficient data matrices. 
Problems include matrices in which the syntax of the UII data was incorrect or missing key elements; matrices that could not be electronically read; and matrices that contained a UII number that had not been registered in DOD’s IUID Registry. Although the Marine Corps and Air Force have assessed a portion of the data matrices on newly-acquired items in their inventories, these components are not systematically assessing whether contractors are sufficiently marking these items. For example, the Marine Corps estimates that it has assessed the sufficiency of contractors’ marking for about 79 percent of all newly-acquired items’ data matrices in its inventory. Officials from both the Marine Corps and Air Force explained, however, that neither has developed a systematic approach for inspecting these items’ data matrices. According to Army officials, the Army also lacks a systematic approach to inspecting these items’ data matrices. For example, the two depots identified by the Army as furthest along in IUID implementation have not established a policy or set procedures for assessing the sufficiency of the data matrices of newly-acquired items in their inventories. Defense Logistics Agency officials explained that although the agency’s personnel do perform various types of inspection and acceptance procedures on items delivered to its sites, they are not inspecting items’ data matrices. In addition, the Navy does not have a policy or plans in place to systematically assess newly-acquired items’ data matrices. According to the Defense Contract Management Agency’s information memorandum that describes procedures for inspecting contractors’ data matrices, the agency’s inspectors are to verify the readability of these matrices if a scanner for reading matrices is available at the inspection site. However, as previously discussed, the only way to assess the functionality of a data matrix is to use a tool that can electronically read the matrix. 
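The three deficiency categories described above map naturally onto a simple inspection routine. The sketch below is hypothetical: the registry is mocked as a set of UII strings, the syntax check is reduced to a presence check, and the UII values are invented for illustration:

```python
# Hypothetical sketch of the three inspection checks implied above. The
# registry is mocked; a real inspection would query DOD's IUID Registry.
def inspect_data_matrix(scanned_uii, registry):
    """Return a list of deficiencies found for one scanned data matrix."""
    if scanned_uii is None:               # scan failed: matrix unreadable
        return ["matrix could not be electronically read"]
    deficiencies = []
    if not scanned_uii.strip():           # UII data missing key elements
        deficiencies.append("UII data missing or malformed")
    elif scanned_uii not in registry:     # UII was never registered
        deficiencies.append("UII not registered in IUID Registry")
    return deficiencies

mock_registry = {"D12345ABC001"}          # invented UII for illustration
print(inspect_data_matrix("D12345ABC001", mock_registry))  # -> []
print(inspect_data_matrix("D99999XYZ999", mock_registry))
```

An empty result means the matrix passed all three checks; any other result mirrors one of the deficiency types the Marine Corps and Air Force reported.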
Because its memorandum does not require inspectors to electronically read data matrices in all cases, the Defense Contract Management Agency cannot ensure that the items it inspects have an IUID label with a data matrix that can be properly read; that the data matrix contains the necessary UII data, organized with the proper syntax; and that the item’s UII number is registered in DOD’s IUID Registry. Furthermore, officials from the Defense Contract Management Agency said their inspectors rely on contractors to provide the technology to electronically read data matrices. If contractors do not provide this technology, inspectors at those manufacturing sites cannot electronically read and verify the sufficiency of data matrices. The Office of Defense Procurement and Acquisition Policy and the components have taken steps to address some of these challenges. For example, in February 2011, the office issued a DOD standard operating procedure for assessing the sufficiency of data matrices. Also, through the Product Quality Deficiency Report process, Marine Corps item managers are beginning to work with contractors to address the contractors’ delivery of deficient data matrices. In addition, in the first quarter of fiscal year 2012, the Air Force began to track the number of deficient data matrices it is discovering as it assesses the sufficiency of newly-acquired items’ data matrices. However, unless all of the components and the Defense Contract Management Agency are systematically assessing whether contractors are sufficiently marking newly-acquired items, DOD cannot know the full extent to which contractors are supplying deficient data matrices. DOD has made some progress in developing a capability to share UII data enterprisewide and integrating IUID functionality with its Enterprise Resource Planning systems. 
For example, DOD is in the process of revising key guidance on using IUID technology and UII data across DOD, and three components are temporarily storing UII data until they are ready to use these data in their Enterprise Resource Planning systems. However, DOD faces several challenges in sharing UII data enterprisewide, and it is unlikely that it will meet a fiscal year 2015 goal to use UII data for the management of items in enterprise information systems. Further, DOD cannot reliably predict when it will meet this goal, because ODASD(SCI) and the components have not fully developed schedules for integrating IUID functionality with the IT systems through which the components plan to achieve this capability, their Enterprise Resource Planning systems. It is DOD’s goal for its components to share UII data departmentwide, and the components are to use these data for unique item tracking. According to a DOD instruction, the Director for Defense Procurement and Acquisition Policy is to ensure unique IUID identifiers are established to enable items to be tracked and traced throughout their life cycle in acquisition and logistics business processes and systems, in an integrated approach across DOD. Further, a 2011 IUID implementation schedule from ODASD(SCI) states that certain DOD item management processes are to be using UII data to manage items by the end of fiscal year 2015. Specifically, according to the schedule, by fiscal year 2015, two categories of logistics processes—intensive item management and product life cycle management—are to use IUID technology. As previously discussed, the task force report assessed the potential benefits of using IUID technology in these logistics processes. In both cases, the report explains that sharing UII data across DOD is key to realizing the full benefits of these processes. 
Regarding intensive item management, the report states that it is clear DOD requires an enterprisewide approach to managing critical items, and that the largest benefits of managing items intensively would be achieved by using UII data across the enterprise. With regard to product life cycle management, the task force report estimates that DOD could achieve substantial financial benefits through the use of UII data in this process. In our previous discussion, we explained that the task force report’s methodology for estimating these financial benefits may not be appropriate. However, the report explains that if DOD is to achieve the potential benefits of product life cycle management, those benefits would come primarily through analysis of UII data, and that DOD should expect to see the full benefit of this analysis as its Enterprise Resource Planning systems begin sharing and using these data. As previously mentioned, the report states that DOD could begin to achieve the net financial benefits of IUID implementation in fiscal year 2017. However, in order to do so, the report assumes that DOD will have the capability to share and use UII data, enterprisewide, by fiscal year 2015. Since DOD began its IUID technology implementation efforts in fiscal year 2004, it has made some progress in preparing to share UII data enterprisewide through its Enterprise Resource Planning systems, in two main areas. First, DOD is in the process of modifying its supply chain management policy and guidance to incorporate use of IUID technology and UII data across DOD. Second, three of the components have developed IT systems to temporarily store data from IUID-labeled items until these data can be uploaded into Enterprise Resource Planning systems. In January 2012, ODASD(SCI) provided us with sections from a draft revision to the regulation that establishes DOD’s supply chain management processes and procedures, including sections pertaining to IUID. 
The draft sections we reviewed define standards and procedures for using IUID in DOD Enterprise Resource Planning systems. If implemented, the revisions would likely help DOD move forward in its integration of IUID technology and UII data with its IT systems—including its Enterprise Resource Planning systems—in two ways. First, the draft revisions would establish standards for acceptable electronic scanners, which should help ensure interoperability across DOD organizations that are scanning and uploading UII data from IUID labels’ data matrices. Second, the draft revisions would require DOD organizations to update their UII data-sharing capabilities by adopting a system to share UII data, such as the Defense Logistics Management Standards; according to DOD officials, this system is replacing an older one that is unable to share UII data, and it is for that reason essential to IUID implementation. Because their Enterprise Resource Planning systems are not currently capable of accepting or storing UII data at a componentwide level, the Army, Marine Corps, and Air Force have developed IT systems to temporarily store UII data generated by the labeling of both legacy and newly-acquired items. However, these systems have limited capabilities to manage or use UII data. For example, these temporary systems are not capable of sharing UII data within or between components. As of January 2012, the Air Force’s temporary system was limited to use on individual computer workstations, and could not send or receive UII data from other Air Force or DOD computers. In addition, the Marine Corps used its temporary system to provide us with information on its inventory of IUID-labeled legacy items, but the system was not designed to perform more complex tasks such as analyzing UII data in support of product life cycle management processes. DOD faces three challenges in sharing UII data enterprisewide and integrating IUID functionality with its Enterprise Resource Planning systems. 
First, ODASD(SCI) and the components have not fully defined the requirements for using UII data across DOD, or within the components’ Enterprise Resource Planning systems. Second, as of April 2012, the Air Force and the Navy were not actively integrating IUID with their Enterprise Resource Planning systems. And third, ODASD(SCI) and the components have not fully developed schedules for integrating IUID functionality with their Enterprise Resource Planning systems. As a result, DOD is unlikely to meet its fiscal year 2015 goal to use UII data in intensive item management and product life cycle management. Officials from the Army, Marine Corps, Air Force, and Defense Logistics Agency said that their components had not yet fully defined the component-specific UII requirements for their respective Enterprise Resource Planning systems. A Marine Corps official stated that, as of January 2012, requirements for how the Marine Corps system will interface with scanners were in draft form. Officials from the Defense Logistics Agency explained that because the agency manages items on the basis of the requirements of the other components, it could not finalize the business rules for using UII data in its system until the other components had determined their requirements. According to DOD officials, it is unclear when the requirements or related business rules will be fully defined and, until they are defined, the components cannot complete their integration of IUID technology with their IT systems, including their Enterprise Resource Planning systems. As of April 2012, the Air Force and the Navy were not actively integrating IUID with their Enterprise Resource Planning systems. According to Air Force officials, because of cost overruns and delays in the development of its Enterprise Resource Planning system, the Office of the Secretary of Defense and the Air Force are planning to evaluate alternatives to the system.
Because this system is central to the Air Force’s IUID implementation efforts and the Air Force does not know when a decision will be made, officials stated that they cannot estimate when—or whether—the system will be ready to share and use UII data. According to Air Force officials, the Air Force has a data network that provides the capability to share UII data within the Air Force, between certain IT systems. However, they stated that this data network does not currently have the capability to share UII data with other components, across DOD. Further, the officials stated that the Air Force’s plan is to eventually share UII data enterprisewide through its Enterprise Resource Planning system. In October 2011, senior Navy officials stated that the Navy had no plans to integrate IUID with its Enterprise Resource Planning system for Supply, and will not be ready to share or use UII across its systems—or with other components’ systems—by the end of fiscal year 2015. Further, they explained that the Navy was not actively integrating IUID with its Enterprise Resource Planning system for Supply because strict budget conditions compelled Navy leadership to allocate funds to programs the Navy considered to be of higher priority than IUID implementation. As of April 2012, the Navy stated that proposed integration efforts remained unfunded. Because neither ODASD(SCI) nor the components have complete integrated master schedules for the integration of IUID functionality with their Enterprise Resource Planning systems, DOD cannot reliably predict when it will be able to use these systems to meet its fiscal year 2015 goal to use UII data in intensive item management and product life cycle management. 
A key internal control is the use of performance measures, and we have previously reported that such metrics for a program’s main efforts—including interim milestones and schedules—give decision makers the information needed to assess progress and estimate realistic completion dates. In addition, we have reported that a reliable schedule—such as an integrated master schedule—is crucial to estimating the overall timeline and cost of IT programs, including Enterprise Resource Planning systems. An integrated master schedule is the time-phased schedule DOD and other agencies use for assessing technical performance. It contains the detailed tasks or work packages necessary to ensure program execution. Further, we have reported that without fully integrating the distinct activities that make up an IT program with such a schedule, an organization will not be able to measure its progress toward completion and cannot be held accountable for results. Although the Army has integrated master schedules for its two Enterprise Resource Planning systems, these schedules lack key elements, such as distinct activities for IUID integration and detailed processes for transmission of UII data across the systems. The Air Force has an integrated master schedule for its Enterprise Resource Planning system, and this schedule has distinct activities for IUID integration. However, as previously discussed, the Office of the Secretary of Defense and the Air Force are in the process of evaluating whether to modify, or cancel and replace, the Air Force’s Enterprise Resource Planning system. According to Air Force officials, the system’s current schedule will need to be revised once the future of the system has been determined. The Marine Corps has an integrated master schedule for its Enterprise Resource Planning system, and Marine Corps officials have stated that they plan to amend the schedule to include distinct IUID activities. However, as of January 2012, it did not contain them. 
As discussed previously, the Navy is not actively integrating IUID with its Enterprise Resource Planning system for Supply and does not have an integrated master schedule for integrating IUID with its Enterprise Resource Planning system. ODASD(SCI) has produced an IUID timeline that contains general targets for the fielding of IUID-capable Enterprise Resource Planning systems. However, an ODASD(SCI) official stated that it does not have an integrated master schedule to coordinate or track the progress of the components’ efforts to integrate IUID with their Enterprise Resource Planning systems. Officials from ODASD(SCI) and several components have reported that they are unsure of when the components’ Enterprise Resource Planning systems will be able to share UII data within their networks, a key capability for both intensive item management and product life cycle management. Given the challenges ODASD(SCI) and the components face in sharing UII data enterprisewide and integrating IUID with their Enterprise Resource Planning systems, DOD likely will face difficulties in meeting its IUID integration goals. Without fully defined requirements; without resolution of the challenges posed by the Air Force and the Navy not actively integrating IUID with their Enterprise Resource Planning systems; and without an integrated master schedule that includes IUID integration at the component level and at the DOD-wide level, DOD cannot reliably predict whether it will meet its goal to use these systems to manage items through intensive item management and product life cycle management by the end of fiscal year 2015, or predict when these systems will have this capability. Using UII data could enable DOD to improve accountability and management of equipment and materiel, and increase efficiencies in maintenance, which could potentially result in cost savings in some cases.
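To illustrate why an integrated master schedule matters for predicting completion dates, the sketch below applies critical-path logic to a set of time-phased, dependent activities. The task names, durations, dependencies, and start date are hypothetical assumptions for illustration only, not drawn from any DOD schedule.

```python
from datetime import date, timedelta

# Hypothetical IUID-integration activities: duration in days plus a list
# of prerequisite activities. These values are illustrative assumptions.
tasks = {
    "define_uii_requirements": (90, []),
    "update_scanner_interfaces": (60, ["define_uii_requirements"]),
    "erp_data_model_changes": (120, ["define_uii_requirements"]),
    "component_integration_test": (45, ["update_scanner_interfaces",
                                        "erp_data_model_changes"]),
}

def earliest_finish(task, memo):
    """Earliest finish (days from program start) along the longest
    dependency chain, i.e., the critical-path calculation that a
    time-phased integrated master schedule makes possible."""
    if task not in memo:
        duration, deps = tasks[task]
        start = max((earliest_finish(d, memo) for d in deps), default=0)
        memo[task] = start + duration
    return memo[task]

memo = {}
finish_days = max(earliest_finish(t, memo) for t in tasks)
start_date = date(2012, 10, 1)  # assumed program start for illustration
completion = start_date + timedelta(days=finish_days)
print(finish_days, completion)  # 255 days -> 2013-06-13
```

Because each activity is distinct and linked to its predecessors, slipping any one duration and recomputing immediately yields a revised completion date, which is the kind of progress measurement the report notes is impossible without such a schedule.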
DOD is in the process of developing a framework for managing and implementing IUID technology, but could benefit from fully implementing best management practices that would enable the department to better determine the costs and benefits of IUID implementation, and progress toward goals. DOD components have reported marking more than 2 million legacy items, and DOD has made some progress in ensuring that its contractors are marking newly-acquired items with IUID. As of January 2012, though, DOD’s implementation of IUID technology faces several substantial challenges. For example, while the components report marking more than 2 million items, DOD does not have quantifiable goals or interim milestones that it can use to assess progress in achieving its fiscal year 2015 goal of marking a “majority” of legacy items, or the task force’s goal of marking a “significant” number of legacy items. With regard to items that must be marked by contractors, in the absence of policies and procedures that establish a systematic process for assessing the sufficiency of contractor-supplied data matrices, DOD is unable to determine the extent to which contractors are sufficiently marking items. This limits DOD’s ability to ensure that it can track those items. Also, DOD has not fully developed the schedules needed to integrate IUID with existing IT systems, so that DOD can share UII data enterprisewide. This impedes its successful integration of IUID technology with these systems by the end of fiscal year 2015, its stated goal, and prevents the department from determining when it might achieve this integration. At a time when the nation faces fiscal challenges, and defense budgets are becoming tighter, DOD leaders’ lack of key information on IUID implementation could hinder sound program management and decision making.
We recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to complete its implementation and management framework for IUID by incorporating key elements of a comprehensive management approach, such as a complete analysis of the return on investment, quantitatively-defined goals, and metrics for measuring progress. To do so, we recommend that the Secretary of Defense direct the following organizations to take six actions:

The Under Secretary of Defense for Acquisition, Technology, and Logistics to update the IUID task force report’s estimates of costs and benefits by incorporating key elements of a sound investment analysis, including a more complete estimate of all associated costs, an appropriate methodology for estimating benefits, and a sensitivity analysis of these estimates.

The Under Secretary of Defense for Acquisition, Technology, and Logistics, in coordination with the components, to develop quantitatively-defined goals for the number of legacy items that may allow DOD to achieve the task force’s estimate of IUID’s potential benefits, by marking a “significant” number of these legacy items, or meet ODASD(SCI)’s goal that DOD mark a “majority” of these legacy items by fiscal year 2015.

The Under Secretary of Defense for Acquisition, Technology, and Logistics, in coordination with the components, to establish quantifiable interim milestones for marking legacy items that allow DOD to track progress toward its goals.

The Under Secretary of Defense for Acquisition, Technology, and Logistics, in coordination with the components, to track progress using a consistent set of criteria, such as DOD’s IUID marking criteria.

The components and the Defense Contract Management Agency to develop policies and procedures that provide for systematic assessment of the sufficiency of contractor-marked items’ data matrices.
The Under Secretary of Defense for Acquisition, Technology, and Logistics to require the components to examine and report to the Office of Defense Procurement and Acquisition Policy on all types of contracts that should include the acquired-items and government-property clauses.

In addition, to enable DOD to successfully share UII data enterprisewide and integrate IUID functionality with its Enterprise Resource Planning systems, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to coordinate with the military services and the Defense Logistics Agency to take the following two actions:

Define the requirements for using UII data across DOD and within the components’ Enterprise Resource Planning systems.

Develop or revise integrated master schedules for the integration of IUID technology with the components’ individual Enterprise Resource Planning systems—and between these systems—across DOD. These schedules should fully integrate distinct IUID activities.

We also recommend that the Secretary of Defense direct the Secretary of the Navy to develop a plan to share UII data enterprisewide. In commenting on a draft of this report, DOD concurred with eight recommendations and partially concurred with one recommendation. DOD’s comments are reprinted in appendix III. DOD also provided technical comments, which we considered and incorporated where appropriate. DOD concurred with our recommendation to develop quantitatively-defined goals for the number of legacy items that may allow DOD to achieve the task force’s estimate of IUID’s potential benefits.
DOD stated that an IUID working group will identify the target population of items that qualify for IUID marking in a list of the items’ National Stock Numbers and DOD will track progress in its marking of individual items on this list according to component IUID implementation plans that are due to be submitted to the Assistant Secretary of Defense for Logistics and Materiel Readiness by September 2012. DOD concurred with our recommendation to establish quantifiable interim milestones for marking legacy items that allow DOD to track progress toward its goals. DOD stated that its IUID working group will establish interim milestones to track the progress of marking legacy assets as part of the development of component IUID implementation plans to be submitted to the Assistant Secretary of Defense for Logistics and Materiel Readiness by September 2012. DOD concurred with our recommendation to track progress using a consistent set of criteria. DOD stated that progress will be tracked using a consistent set of criteria, once developed by the IUID working group. DOD concurred with our recommendation to develop policies and procedures that provide for systematic assessment of the sufficiency of contractor-marked items’ data matrices. DOD stated that the Defense Contract Management Agency has risk-based assessment policies and procedures in place. According to DOD, these include a review of contracts to determine whether they contain an IUID requirement; surveillance of a contractor’s IUID marking; and an IUID checklist that requires agency personnel to examine an item’s data matrix. DOD explained that agency personnel assess the sufficiency of a data matrix by electronically reading it with a scanner supplied by a contractor or through a statement of quality from contractors that the agency has determined have adequate quality control.
We believe that DOD’s concurrence with our recommendation may lead to the components improving their capability to systematically assess these matrices, and that the agency’s policies and procedures may assist its inspectors in doing the same. However, our review of the policies and procedures provided by the agency indicates that it does not require inspectors to assess the sufficiency of data matrices in all cases. For example, if a contractor does not provide evidence that it has marked items with sufficient data matrices, and no IUID scanner is available on site, neither the agency’s 2009 information memorandum describing procedures for inspecting contractors’ data matrices, nor its IUID checklist, provides an alternative method for inspectors to assess the sufficiency of items’ matrices. Because of this, we continue to believe that the agency cannot ensure that the items it inspects have IUID labels with sufficient data matrices, and that it should continue to develop policies and procedures that provide for systematic assessment of the sufficiency of contractor-marked items’ data matrices. DOD concurred with our recommendation for the components to examine and report to the Office of Defense Procurement and Acquisition Policy on all types of contracts that should include the acquired-items and government-property clauses. DOD stated that the components will provide contract evaluation reports—for items meeting any of DOD’s IUID criteria, as well as pieces of government-furnished property that meet these criteria—and report on compliance with the requirements to include the appropriate Defense Federal Acquisition Regulation Supplement IUID clauses in contracts. DOD concurred with our recommendation to define the requirements for using UII data across DOD and within the components’ Enterprise Resource Planning systems. DOD stated that the IUID working group will define DOD-wide IUID functional requirements.
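As an illustration of the kind of automated, systematic sufficiency check such policies and procedures might support, the sketch below validates a decoded data-matrix payload. The field names, allowed character set, and pass/fail rules are assumptions for illustration; the 50-character UII length limit reflects the commonly cited MIL-STD-130 constraint, but none of this represents DOD's or the agency's actual inspection procedure.

```python
# Illustrative sufficiency check for a decoded IUID data-matrix payload.
# Field names and rules are assumptions, not DOD inspection criteria.

def check_data_matrix(fields):
    """Return a list of problems found in a decoded label payload."""
    problems = []
    # UII Construct #2 elements: enterprise identifier, original part
    # number, serial number (assumed field names for this sketch).
    required = ("enterprise_id", "part_number", "serial_number")
    for name in required:
        if not fields.get(name, "").strip():
            problems.append(f"missing {name}")
    # Concatenated UII, commonly limited to 50 characters (MIL-STD-130).
    uii = "".join(fields.get(name, "") for name in required)
    if len(uii) > 50:
        problems.append("UII exceeds 50 characters")
    if not all(c.isalnum() and c.isascii() or c in "-/" for c in uii):
        problems.append("UII contains invalid characters")
    return problems

good = {"enterprise_id": "0CVA5", "part_number": "1234-5678",
        "serial_number": "S000421"}
bad = {"enterprise_id": "0CVA5", "part_number": "", "serial_number": "S1"}
print(check_data_matrix(good))  # expect no problems
print(check_data_matrix(bad))   # expect a missing-field report
```

A check of this shape could run against every scanned matrix rather than only those with contractor-supplied evidence, which is the gap the report identifies in the agency's current procedures.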
DOD concurred with our recommendation to develop or revise integrated master schedules for the integration of IUID technology with the components’ individual Enterprise Resource Planning systems—and between these systems—across the department. DOD stated that the components have been tasked to submit revised IUID implementation plans to the Assistant Secretary of Defense for Logistics and Materiel Readiness by September 2012. We believe that such plans could assist DOD in improving its management approach for the implementation of IUID. However, as previously discussed, an integrated master schedule has specific characteristics that make it distinct from an implementation plan. Specifically, an integrated master schedule is a time-phased schedule that DOD and other agencies use for assessing technical performance. It contains the detailed tasks or work packages necessary to ensure program execution. Further, we have reported that without fully integrating the distinct activities that make up an IT program with such a schedule, an organization will not be able to measure its progress toward completion and cannot be held accountable for results. Because of this, we continue to believe that integrated master schedules should be developed or revised for the integration of IUID technology with the components’ Enterprise Resource Planning Systems. DOD concurred with our recommendation that the Secretary of the Navy develop a plan to share UII data enterprisewide. DOD stated that the Navy—participating in the IUID working group—will develop IUID requirements as part of the working group’s definition of DOD-wide IUID functional requirements. 
Regarding the specific requirement for IUID functionality in the Navy’s Enterprise Resource Planning system—Navy Enterprise Resource Planning system for Supply—DOD stated that the Assistant Secretary of Defense for Logistics and Materiel Readiness will continue to work with the Chief of Naval Operations (Deputy Chief of Naval Operations) to develop a plan to include IUID requirements in this system. DOD partially concurred with our recommendation to update the IUID task force report’s estimates of costs and benefits by incorporating key elements of a sound investment analysis, including a more complete estimate of all associated costs, an appropriate methodology for estimating benefits, and a sensitivity analysis of these estimates. DOD stated that the benefits to DOD of implementing IUID marking are to improve asset accountability, tracking, and the life cycle management of targeted items. Further, DOD stated that it will continue to identify costs of implementing IUID as IUID is implemented across DOD. As previously discussed, a best practice for analyzing a program’s return on investment is the estimation of all potential costs, and DOD efforts to continue to identify costs of IUID implementation may be a positive step in this direction. According to DOD, another best practice for analyzing a program’s return on investment is analyzing benefits, and making recommendations, based on relevant evaluation criteria. DOD has estimated that IUID implementation could cost $3.2 billion, and the components report that they have already spent at least $219 million on implementation efforts. Moreover, DOD has estimated that implementing IUID technology could save $3 billion to $5 billion per year. As previously discussed, the methodology DOD used to estimate these benefits may not be appropriate to the scale and complexity of DOD’s IUID implementation efforts.
For example, the task force estimated DOD-wide savings on the basis of a limited number of case studies; these case studies did not address programs that use IUID as the technology that provides a unique identifier to track items through serialized item management; and even when a logistics program experiences cost savings after introducing a new technology or process, it can be difficult to link the savings directly to a specific cause or technology such as IUID. Given IUID’s potential costs and that DOD’s methodology for estimating IUID’s potential financial benefits may not be appropriate, we continue to believe that an estimate of both IUID’s costs and benefits, based on an appropriate methodology, and a sensitivity analysis of these estimates, would provide DOD leaders with key information to better enable sound program management and determine whether continued spending on IUID is likely to result in a significant return on investment. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Directors of the Defense Logistics Agency and the Defense Contract Management Agency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or merrittz@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
To determine the extent to which the Department of Defense (DOD) has a comprehensive management approach for its implementation of item unique identification (IUID), we reviewed previously published DOD and GAO work to identify best practices; the IUID implementation framework documentation provided by the Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration (ODASD(SCI)); the DOD IUID task force’s analysis of the potential costs and benefits of IUID implementation; and the components’ estimates of historical spending and fiscal year 2012 budget requests for IUID implementation. To determine the extent to which DOD components have marked legacy items with IUID, we reviewed DOD criteria for the types of items to be marked with IUID labels; reviewed DOD’s plans for marking legacy items; and gathered data on the number and type of legacy items in components’ inventories, and how many had been marked as of October 2011. To determine the extent to which DOD has taken steps to ensure that items are sufficiently marked by contractors with IUID, we reviewed the Defense Federal Acquisition Regulation Supplement clauses that require contractors to mark items with labels—or establish virtual unique item identifiers (UII)—and register the items, and we gathered data from the components on the quality of contractors’ IUID data matrices. To determine the extent to which DOD has integrated IUID with its enterprise information systems, we reviewed our previously published work on best practices for the planning of large-scale information technology efforts; reviewed DOD-wide and component-level policy on the use of UII data in its information technology systems, which include DOD Enterprise Resource Planning systems; reviewed DOD-wide and component-level integrated master schedules for the integration of IUID technology with these systems, if available; and reviewed other types of existing schedules and system planning documents.
In addition, we visited selected sites to observe key IUID activities. To select the sites we used a nongeneralizable, judgmental sample based on a number of criteria, including DOD component and the type of IUID activity performed at the site. For all our objectives, we interviewed officials knowledgeable about DOD’s IUID implementation efforts, including officials from ODASD(SCI), as well as other officials from the Office of the Secretary of Defense, the components, and the Defense Contract Management Agency. We assessed the reliability of all computer-generated data provided by DOD for each of our objectives by reviewing existing information about the data and the systems that produced the data and by interviewing agency officials knowledgeable about the data to determine the steps taken to ensure the accuracy and completeness of the data. In the course of our assessment, we reviewed estimates provided by the components on the number of legacy items marked and to be marked; deficient data matrices; and historical and requested IUID spending. Each of the components provided estimates on either the number of legacy items marked, or to be marked; several of the components provided estimates of both. On the basis of our review of the sources and methodology used by the Army, Marine Corps, and Air Force to produce estimates of the number of legacy items they have marked, we determined that these data are sufficiently reliable for the purposes of reporting the components’ best estimates of the size of this population of items. The Navy was not able to estimate the number of legacy items it has marked; the Defense Logistics Agency reported that it has not marked legacy items. 
Based on our review of the sources and methodology used by the Marine Corps and Air Force to produce estimates of the number of legacy items they must mark in the future, we determined that these data are sufficiently reliable for the purposes of reporting on the components’ best estimates of the size of this population of items. As previously discussed, the Army, Navy, and Defense Logistics Agency explained that their estimates on the number of legacy items that must be marked in the future are not complete. Although not complete, we determined that the data on legacy items represent the components’ best estimates, and are sufficiently reliable for the purposes of reporting on the general size of the population of legacy items they must mark in the future. The Army, Navy, and Defense Logistics Agency did not provide data on deficient data matrices; the Marine Corps and Air Force did provide these data. We reviewed the data sources and methodology used by the Marine Corps and Air Force to produce data on the number of deficient data matrices they have discovered through their review of a portion of data matrices on newly-acquired items in their inventories. We determined that these data are sufficiently reliable for the purposes of reporting on the percentage of data matrices that these components classified as deficient, out of the portion of data matrices in their inventories that they have assessed for sufficiency. The Defense Logistics Agency did not provide estimates on either historical or requested IUID spending; the Army and Navy provided estimates on historical IUID spending; and the Marine Corps and Air Force provided estimates of both historical and requested IUID spending. We discussed with component officials the sources and methodology they used to produce the data on their historical IUID spending and fiscal year 2012 budget requests for IUID spending. The Marine Corps and Air Force provided data on both their historical and future spending. 
On the basis of our review of their sources and methodology for producing these data, we determined that the spending data provided by the Marine Corps and Air Force are sufficiently reliable for the purposes of reporting on these components’ historical spending and fiscal year 2012 IUID budget requests. As previously discussed, Army and Navy officials explained that their estimates on historical IUID spending are not complete. Although not complete, we determined that these data represent the best estimates of the Army and Navy on their historical IUID spending, and are sufficiently reliable for the purposes of reporting on the historical spending data that are available. In addition to the contact named above, Kimberly Seay, Assistant Director; Emily Biskup; Cindy Brown Barnes; Cynthia Grant; Neelaxi Lakhmani; Jason Lee; Alberto Leff; John Martin; Charles Perdue; Carol Petersen; Karen Richey; Darby Smith; Chris Turner; Cheryl Weissman; and Michael Willems made key contributions to this report.

Defense Logistics: DOD Needs to Take Additional Actions to Address Challenges in Supply Chain Management. GAO-11-569. Washington, D.C.: July 28, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
DOD’s 2010 Comprehensive Inventory Management Improvement Plan Addressed Statutory Requirements, but Faces Implementation Challenges. GAO-11-240R. Washington, D.C.: January 7, 2011.
DOD Business Transformation: Improved Management Oversight of Business System Modernization Efforts Needed. GAO-11-53. Washington, D.C.: October 7, 2010.
DOD’s High-Risk Areas: Observations on DOD’s Progress and Challenges in Strategic Planning for Supply Chain Management. GAO-10-929T. Washington, D.C.: July 27, 2010.
Department of Defense: Additional Actions Needed to Improve Financial Management of Military Equipment. GAO-10-695. Washington, D.C.: July 26, 2010.
DOD’s High-Risk Areas: Actions Needed to Reduce Vulnerabilities and Improve Business Outcomes. GAO-09-460T. Washington, D.C.: March 12, 2009.
Defense Logistics: Lack of Key Information May Impede DOD’s Ability to Improve Supply Chain Management. GAO-09-150. Washington, D.C.: January 12, 2009.
DOD’s High-Risk Areas: Efforts to Improve Supply Chain Can Be Enhanced by Linkage to Outcomes, Progress in Transforming Business Operations, and Reexamination of Logistics Governance and Strategy. GAO-07-1064T. Washington, D.C.: July 10, 2007.
Defense Logistics: Efforts to Improve Distribution and Supply Support for Joint Military Operations Could Benefit from a Coordinated Management Approach. GAO-07-807. Washington, D.C.: June 29, 2007.
DOD’s High-Risk Areas: Progress Made Implementing Supply Chain Management Recommendations, but Full Extent of Improvement Unknown. GAO-07-234. Washington, D.C.: January 17, 2007.
DOD’s High-Risk Areas: Challenges Remain to Achieving and Demonstrating Progress in Supply Chain Management. GAO-06-983T. Washington, D.C.: July 25, 2006.
IUID technology allows DOD to assign a unique number to an item and use that number to manage that item in a variety of logistics processes. In 2003, DOD began implementation of IUID and has estimated that it could improve the accountability and maintenance of its property and equipment and save from $3 billion to $5 billion per year. Also, integrating and sharing UII data across DOD’s enterprise information systems could enable DOD to track equipment as it moves between its components. GAO evaluated the extent to which DOD has (1) incorporated key elements of best management practices into its framework for IUID implementation, (2) marked items with IUID labels, and (3) developed the capability to share UII data across DOD in its enterprise information systems. GAO reviewed documents, interviewed cognizant officials, and reviewed DOD and GAO key practices for its analysis. The Department of Defense (DOD) has taken some steps to improve its approach to managing and implementing Item Unique Identification (IUID) technology, but has yet to incorporate some key elements of best management practices into its evolving framework for management of IUID implementation. These include internal controls and analysis of return on investment. DOD has included certain internal controls, such as defining key areas of authority for IUID implementation, and it is revising policy to incorporate IUID. However, DOD does not have performance measures, such as reliable schedules for predicting when its enterprise information systems will be able to manage items using IUID data, or a full estimate of IUID’s cost and benefits. Without a management framework that includes such key practices, DOD has faced challenges in implementing IUID technology and may not be well positioned to achieve potential financial and nonfinancial benefits. 
DOD’s data on the number of items already in its inventory—legacy items—marked with IUID labels to date is incomplete, and DOD lacks assurance that contractors are sufficiently marking newly acquired items and government-furnished property. The military services mark legacy items and have reported marking more than 2 million items. However, DOD does not have complete information on the total number of legacy items its components have marked and must mark in the future; does not have a full set of quantifiable goals or interim milestones corresponding to its IUID marking criteria, such as certain items that cost $5,000 or more; and does not use consistent criteria among its components to track progress. Without the components reporting complete and comparable data, DOD’s ability to assess progress in marking legacy items will remain limited. Also, DOD does not have assurance that contractors are sufficiently marking newly acquired items and government-furnished property. DOD reported that as of January 2012, over 2,500 contractors had marked or registered over 11 million items. However, DOD does not require the components to examine and report on all types of contracts that should include IUID marking clauses, nor does it have policies and procedures that provide for systematic assessment of the sufficiency of data contained in these items’ labels. Hence, DOD cannot know the full extent to which contractors are supplying IUID labels with the data needed to track items. DOD’s ability to track and share unique item identifier (UII) data across components is hampered by the lack of full integration of data into components’ enterprise information systems. DOD has made some progress but faces challenges as it proceeds with its integration plans. 
DOD is revising its supply chain management policy and guidance to include IUID use, but has not fully defined requirements for using UII data, nor developed complete, integrated master schedules for integrating IUID DOD-wide and within components’ systems. Such schedules enable agencies to predict the cost and timelines of their systems’ development. Without such requirements and schedules, DOD cannot adequately predict when the systems will be able to use UII data, or whether DOD will meet its fiscal year 2015 goal for using UII data to manage items throughout their life cycle. GAO is making nine recommendations for enhancing DOD’s implementation of IUID. They include actions to improve DOD’s management of IUID implementation through best practices; enable the components to report complete data for marking items with IUID labels; and enable the components to share UII data across DOD enterprise information systems. DOD concurred with eight recommendations and partially concurred with one related to updating estimated financial costs and benefits of IUID implementation. DOD stated it will continue to identify such costs, but GAO continues to believe that updating benefits is also important, as discussed more fully in the report.
The Disaster Relief Act included approximately $50 billion in supplemental appropriations for fiscal year 2013 to 19 agencies for 61 specific programs for expenses related to the consequences of Hurricane Sandy. Under the authority granted by the Balanced Budget and Emergency Deficit Control Act of 1985, as amended, OMB determined that these supplemental appropriations were to be included in the fiscal year 2013 base subject to sequestration under section 251A of that act. The amounts included in this report are drawn from the Disaster Relief Act as originally enacted, and are not adjusted to account for sequestration. Figure 1 shows the distribution of Sandy disaster relief funding by agency. Appendix II presents more detailed information on the supplemental appropriations provided by the Disaster Relief Act. Most of the agencies’ supplemental appropriations provided by the Disaster Relief Act are related to grant programs. As shown in figure 2, grant programs received more than $41 billion of the approximately $50 billion provided by the Disaster Relief Act. The Disaster Relief Act also provides an oversight framework for these funds in regard to improper payments and the recapture of unexpended grant funds. Specifically, the Disaster Relief Act states that all programs and activities receiving these funds shall be deemed “susceptible to significant improper payments,” for purposes of the Improper Payments Information Act of 2002 (IPIA), and funds for grants shall be expended by the grantees within the 24-month period following the agency’s obligation of funds for the grant, unless OMB waives this requirement for a particular grant program and submits a written justification for such waiver to the Committees on Appropriations of the U.S. Senate and the House of Representatives. The act states that agencies shall include a term in the grant that requires the grantee to return any funds to the agency that are not expended within this 24-month period. 
In addition, the Disaster Relief Act states that through September 30, 2015, the Recovery Accountability and Transparency Board (Recovery Board) shall develop and use information technology resources and oversight mechanisms to detect and remediate waste, fraud, and abuse in the obligation and expenditure of funds to support oversight of Sandy disaster relief funding. The act also states that the Recovery Board will coordinate its activities with OMB, each federal agency receiving appropriations related to the impact of Hurricane Sandy, and the IG of each such agency. As noted above, the Disaster Relief Act required OMB to establish criteria for agencies to use in developing their Sandy disaster relief internal control plans. Internal controls serve as the first line of defense in safeguarding assets and in preventing and detecting fraud, abuse, and errors. Given the magnitude of funding provided by the Disaster Relief Act, it is important for federal agencies to ensure that the funds appropriated under the act are used for their intended purposes. OMB established the criteria in M-13-07, which provides an overview of the internal control planning and reporting requirements for all programs funded under the act with a focus on (1) developing additional internal controls warranted beyond previously existing controls, (2) managing all Sandy disaster-related funding with the same discipline and rigor as programs that are traditionally designated as high risk for improper payments, and (3) managing unexpended grant funds. M-13-07 notes that as required by OMB Circular No. 
A-123, Management’s Responsibility for Internal Control (OMB Circular A-123), agencies must have established internal control plans to prevent waste, fraud, and abuse of federal program funds. Specifically, OMB Circular A-123 states that management is responsible for establishing and maintaining internal control to achieve the objectives of effective and efficient operations, reliable financial reporting, and compliance with applicable laws and regulations. As illustrated in figure 3, OMB directed agencies to describe incremental risks identified for each program administering Sandy disaster relief funding as well as the internal control strategy for mitigating each of these risks (if applicable). M-13-07 also discusses the roles of other parties involved in supporting Sandy disaster relief efforts, including the Recovery Board, the Hurricane Sandy Rebuilding Task Force (Task Force), and agency IGs. The Task Force was established on December 7, 2012, under Executive Order 13632. M-13-07 states that the Task Force is responsible for identifying opportunities for federal agencies to work together to support recovery from Hurricane Sandy and to promote strong accountability for the use of the disaster relief funds. M-13-07 also notes that the Task Force is supported by a program management office that is working with agencies to ensure stakeholder engagement, establish performance metrics to gauge recovery efforts, and monitor the execution of Sandy disaster relief funding. Further, M-13-07 emphasizes that agency internal control plans should reflect consideration of early and frequent engagement between agencies and IGs to discuss issues affecting the Disaster Relief Act’s disaster-related programs and activities in order to identify and mitigate potential risk. In addition to issuing M-13-07, OMB took steps to help agencies develop internal control plans for managing the risks related to Sandy funds. 
These activities occurred prior to and following the release of the guidance. On February 19, 2013, OMB sent the Chief Financial Officer community advance notice of the forthcoming OMB guidance. This notice identified minimum requirements for agency internal control plans that would be included and further explained in the OMB guidance. Also, as reported by OMB staff and agency officials, OMB met with the agencies to discuss agency risk assessments and the development of internal control plans. In accordance with M-13-07, each of the 19 agencies that received funds under the act submitted a Sandy disaster relief internal control plan with specific program details using the template provided by OMB. OMB guidance directed agencies to develop internal control plans based on incremental risk. We found that agencies identified incremental risk related to Sandy activities for 38 of the 61 programs receiving funding under the Disaster Relief Act. Our review of the internal control plans disclosed that agencies did not consistently apply M-13-07 in preparing these plans. Specifically, agencies’ plans ranged from providing most of the required information to not providing any information on certain programs. M-13-07 provides an overview of the internal control planning and reporting requirements for all programs funded under the act with a focus on three major areas: (1) additional internal controls for Sandy-related activities, (2) improper payments protocol, and (3) management of unexpended grant funds. M-13-07 states that agency internal control plans for Sandy-related program funding shall reflect consideration of elements such as conducting additional levels of review, increasing monitoring and oversight of grant recipients, continuing collaboration with the IG community, and expediting review and resolution of audit findings. 
The first element of additional internal control listed in M-13-07 is additional levels of review of award decisions, payment transactions, and other critical process elements that impact the use of Disaster Relief Act funds. This requirement applied to the 38 programs that identified incremental risk related to Sandy disaster relief funding. However, M-13-07 notes that agencies should adopt more expansive review procedures, as appropriate. This allowed agencies to determine whether additional levels of review were necessary for their award decisions, payment transactions, and other critical process elements. M-13-07 did not require agencies to document their rationales for determining whether additional levels of review were appropriate. Table 1 summarizes the requirement to conduct additional levels of review per M-13-07. As illustrated in table 2, our review found that agencies’ discussion in their internal control plans of conducting additional levels of review for 38 programs that identified incremental risk related to Sandy activities varied. Certain agencies did not discuss additional levels of review for programs for which they identified incremental risk. Of the 38 programs that identified incremental risk, 8 programs did not discuss award decisions, 11 programs did not discuss payment transactions, and 12 programs did not discuss critical process elements that impact the use of Disaster Relief Act funds. However, it is not clear from the Sandy disaster relief internal control plans whether these agencies determined that additional levels of review were not appropriate for these programs. While the requirement for additional levels of review did not apply to the 23 programs that did not identify incremental risk, some agencies also discussed conducting additional levels of review for certain programs for which they did not identify incremental risk in their Sandy disaster relief internal control plans. 
Of the 23 programs that did not identify incremental risk, 5 programs discussed additional levels of review for award decisions, payment transactions, and other critical process elements. For example, one agency planned to add an additional level of review by establishing an executive council to make final decisions on project selection for its Hurricane Sandy funding. The second element of additional internal control listed in M-13-07 is increasing monitoring and oversight of grant recipients through (1) increased frequency and specificity of grantee reports, (2) additional site visits, and (3) additional technical assistance and training for grant recipients. This requirement applied to all 17 grant programs that identified incremental risk related to Sandy disaster relief funding. However, M-13-07 notes that agencies should adopt increased monitoring and oversight of grant recipients to the extent appropriate and possible under budgetary constraints. This allowed agencies to justify not designing controls for increased monitoring and oversight of grant recipients because of low program risk or budgetary constraints. M-13-07 did not require agencies to document their rationales for determining whether increased monitoring and oversight of grant recipients were appropriate. Table 3 summarizes the requirement to increase monitoring and oversight of grant recipients per M-13-07. As illustrated in table 4, our review found that agencies’ discussion in their internal control plans of increasing monitoring and oversight of grant recipients varied. For most of the 17 grant programs, agencies planned to increase monitoring and oversight mechanisms for their grant recipients. For example, one agency planned to increase monitoring and oversight of grant recipients by requiring financial and milestone progress reports from its Hurricane Sandy grantees on a monthly basis, rather than quarterly, as required of its other grantees. 
Conversely, certain agencies did not discuss additional monitoring and oversight of grant recipients for some grant programs. Specifically, of the 17 grant programs, 5 did not discuss increasing the frequency and specificity of grantee reporting, 6 did not discuss conducting additional site visits, and 9 did not discuss providing additional technical assistance and training to recipients. However, it is not clear from the Sandy disaster relief internal control plans whether these agencies determined that increasing monitoring and oversight of grant recipients was not necessary for these programs or not possible under budgetary constraints. The third element of additional internal control listed in M-13-07 is that agencies should continue early and frequent engagement with their respective IG. This requirement applied to all programs that identified incremental risk related to Sandy disaster relief funding. Table 5 summarizes the requirement to collaborate with the IG community per M-13-07. As illustrated in table 6, our review found that agencies discussed collaboration with their IGs for most programs, regardless of whether they identified incremental risk. For example, one agency noted in its Sandy disaster relief internal control plan that it planned to hold monthly meetings with its IG to discuss ongoing audits and foster additional coordination through participation in program conferences and training. While the requirement for continued collaboration with the IG community applied to the 38 programs that identified incremental risk, 15 programs that did not identify incremental risk also discussed continuing collaboration with their respective IGs in their Sandy disaster relief internal control plans. Of the 38 programs that identified incremental risk, 3 did not discuss continuing collaboration with the agency’s IG to identify and mitigate potential risk. 
The fourth element of additional internal control listed in M-13-07 is that agencies should expedite the review and resolution of audit findings. M-13-07 states that agencies shall resolve all audit findings, which include findings from GAO, IG, and single audit reports, within 6 months after completion of the audit to the extent practicable. This requirement applied to all programs that identified incremental risk. Additionally, for grant programs that identified incremental risk, M-13-07 states that agencies should avoid granting extension requests for audit report submission and should explore the feasibility of conducting additional audit activities to review internal control procedures prior to funding the activity. Table 7 summarizes the requirement to expedite review and resolution of audit findings per M-13-07. As illustrated in table 8, our review found that agencies’ discussion in their internal control plans of expediting review and resolution of audit findings varied. While the requirement applied to the 38 programs that identified incremental risk, not all agencies discussed resolving all audit findings within 6 months after completion of the audit. Specifically, of the 38 programs, there were 12 programs that identified incremental risk and did not discuss expediting review and resolution of audit findings in their internal control plans. However, while the requirement applied to the 38 programs that identified incremental risk, 5 programs that did not identify incremental risk also discussed expediting review and resolution of audit findings in their Sandy disaster relief internal control plans. For the 17 grant programs, agencies did not discuss avoidance of granting extension requests for audit report submission and exploring the feasibility of conducting additional audit activities prior to funding the activity. 
Specifically, for the 17 grant programs, 14 did not discuss avoiding granting extension requests for audit report submission and 11 did not discuss exploring the feasibility of conducting additional audit activities prior to funding the activity. It is not clear from the Sandy disaster relief internal control plans whether agencies determined that these additional audit activities prior to funding the activity would not be feasible. The Disaster Relief Act states that all programs and activities receiving funds under the act shall be deemed to be “susceptible to significant improper payments” for purposes of IPIA. M-13-07 adds that all federal programs or activities receiving funds under the act are required to calculate and report an improper payment estimate. Additionally, M-13-07 notes that agencies shall manage all Sandy-related funding with the same discipline and rigor as programs that are traditionally designated as high risk for improper payments. Table 9 summarizes the requirement related to improper payments protocol. As illustrated in table 10, our review of agencies’ disaster relief internal control plans for all 61 programs found that agencies discussed developing a sampling methodology to produce and report an estimate of improper payments in the fiscal year 2014 reporting period for 38 programs. Agencies discussed improper payments, but did not discuss producing and reporting an estimate of improper payments for 11 programs. Agencies did not discuss improper payments for 12 programs. The Disaster Relief Act states that funds for grants shall be expended by the grantees within the 24-month period following the agency’s obligation of funds for the grant, unless OMB waives this requirement for a particular grant program and submits a written justification for such waiver to the Committees on Appropriations of the U.S. Senate and the House of Representatives. 
The act also states that agencies shall include a term in the grant that requires the grantee to return any funds to the agency that are not expended within this 24-month period. M-13-07 expands on the act by stating that agencies shall ensure that each proposed grant activity has clear timelines for execution and completion within the statutory period available for grantee expenditure. Table 11 summarizes the requirements related to the management of unexpended grant funds. As illustrated in table 12, our review found that some agencies’ internal control plans did not address OMB’s four requirements related to the management of unexpended grant funds for all 17 grant programs. However, it is not clear whether all of these four requirements apply to each grant program because agencies may be planning to request waivers of the 24-month expenditure requirement for certain of their grant programs. OMB issued guidance to provide oversight over Sandy disaster funding, which represents an important step toward accountability over these funds. Several weaknesses limited the effectiveness of this guidance in providing a comprehensive oversight mechanism for these funds. Specifically, the guidance (1) focused on the identification of incremental risks without adequate linkages to demonstrate that known risks had been adequately addressed, (2) provided agencies with significant flexibility without requirements for documentation or criteria for claiming exceptions, and (3) resulted in certain agencies’ developing their internal control plans at the same time that funds needed to be quickly distributed. The demand for rapid response and recovery assistance suggests that a proactive approach is needed in providing guidance to agencies to ensure accountability over disaster relief funding, prior to a disaster occurring. 
The internal control plans prepared by the agencies under M-13-07 were intended to mitigate incremental risk, and therefore they did not provide comprehensive information on all known risks and internal controls that may affect the programs that received the Sandy disaster funding. For many years, we and the IG community have identified internal control weaknesses in the federal government related to agencies receiving funds for disaster assistance. For example, following Hurricane Katrina, we reported on a number of internal control weaknesses related to contracting issues, such as federal agencies involved in responding to the disaster that had inadequate acquisition plans for carrying out their assigned responsibilities, insufficient knowledge of the market or unsound ordering practices that led to excessive or wasteful expenditures, and insufficient staff available for monitoring and oversight. We also identified control weaknesses related to grants management following Hurricanes Katrina and Rita, such as determining the amount of damage that was actually disaster related; sharing project information among intergovernmental participants during project development, and limitations in how the status of projects is tracked; and inadequate human capital capacity, especially early on in the recovery. Similarly, IGs have reported on internal control weaknesses related to accountability over disaster assistance. For example, IGs have reported that grantees did not complete their disaster relief projects in a timely manner and did not ensure the use of funds for intended purposes, and that states did not provide timely reporting on activity progress related to grant funding as some activities were not reported on until the projects were complete. 
When we compared the incremental risks identified by the agencies receiving funds for Sandy disaster relief with risks identified in prior GAO, IG, and financial statement audit reports related to grants management, contract management, improper payments, and other internal control weaknesses for programs receiving Sandy funding, we determined that some of the risks in these reports were not included in the Sandy disaster relief internal control plans. For example, one agency that reported that it will expend its Sandy disaster relief funds through contracts did not identify any incremental risks. Our review of prior GAO, IG, and financial statement audit reports found significant risks related to the agency’s contract management. According to Standards for Internal Control in the Federal Government, internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Management needs to comprehensively identify risks and should consider all significant interactions between the entity and other parties as well as internal factors at both the entity-wide and activity levels. Because the internal control plans prepared by the agencies are a subset of the complete set of risks related to programs receiving Sandy disaster relief funding, they are not effective for providing comprehensive oversight of Sandy disaster relief funds. A comprehensive risk assessment is necessary to help ensure that agencies have considered all risks when designing internal controls. As described previously, OMB guidance listed various elements of additional internal control that at a minimum should have been reflected in the agencies’ internal control plans. However, the guidance included language that allowed agencies significant flexibility in deciding whether they needed to design additional internal controls. 
M-13-07 did not provide specific criteria for agencies to follow to claim exemptions from requirements, and the guidance did not require agencies to document their rationales for not including additional internal controls in their internal control plans. For example, M-13-07 states that agencies should conduct additional levels of review “as appropriate” and should increase monitoring and oversight of grant recipients “to the extent appropriate to mitigate risk and possible under budgetary constraints.” The guidance did not provide criteria for determining “appropriateness” or “budgetary constraints.” We found that some agencies did not discuss additional levels of review despite having identified incremental risk and did not discuss increased monitoring and oversight of grant recipients for some of their grant programs. Because M-13-07 did not require agencies to document their reasons for these omissions, the extent to which the agencies considered the need for these additional internal controls is not apparent from the Sandy disaster relief internal control plans. Additionally, M-13-07 required agencies to make an annual certification that the appropriate policies and controls were in place for activities and expenses related to Hurricane Sandy. M-13-07 provides agencies flexibility by stating that this annual certification for Hurricane Sandy funding “can be included” as part of the agencies’ annual assurance statements. According to OMB staff, OMB expected agencies to leverage their existing annual internal control review process performed in accordance with OMB Circular A-123 to include the internal controls related to activities and expenses funded by the Disaster Relief Act related to Hurricane Sandy. However, M-13-07 did not include specific requirements linking the annual review of controls to any additional control requirements for disaster-related funding. 
In light of the amount of funds involved and the risks associated with the funds provided by the Disaster Relief Act, on August 2, 2013, we sent a letter to the Director of OMB requesting consideration for sending written instructions to federal agencies to ensure that agency management includes the programs receiving funds for disaster assistance for Hurricane Sandy in their annual internal control reviews and assessments for fiscal year 2013. Such linkage between the incremental risks and mitigating controls related to disaster funding and efforts to address known internal control risks would be an important factor in providing comprehensive oversight of the internal control risks for the programs receiving disaster relief funds. In addition to the lack of comprehensive information on risks and internal controls, there is a risk that the incremental internal controls for Sandy disaster relief funding may not have been designed in time for its distribution. The Disaster Relief Act, which required OMB to issue guidance, was enacted on January 29, 2013. OMB had a short time frame to develop and issue the internal control guidance. As noted earlier, on February 19, 2013, OMB sent the Chief Financial Officer community advance notice of its impending guidance, and OMB finalized its guidance by issuing M-13-07 on March 12, 2013. In many cases, agencies developed and implemented the internal control plans at the same time that the funds needed to be quickly distributed. The Disaster Relief Act required agencies to submit their internal control plans by March 31, 2013, and agencies reported that they had already obligated approximately $4.6 billion as of that date. The limitations we identified in implementing M-13-07 illustrate that developing comprehensive internal control plans while a disaster unfolds is not feasible, and a proactive approach could help ensure that controls are designed in a timely manner. 
For example, OMB has provided standard procurement guidance, through its Emergency Acquisitions Guide, to assist the federal contracting community with carrying out procurement activities during disasters and other emergencies. As we have previously reported, following a disaster, decision makers face a tension between the demand for rapid response and recovery assistance— including assistance to victims—and implementing appropriate controls and accountability mechanisms. The risk for fraud and abuse grows when billions of dollars are being spent quickly. Weather-related events have cost the nation tens of billions of dollars in damages over the past decade. In our 2013 high-risk series, we reported that the United States Global Change Research Program has observed that the impacts and costliness of weather disasters will increase in significance, as what are considered “rare” events become more common and intense because of climate change. We previously reported that the growing number of disaster declarations—98 in fiscal year 2011 compared with 65 in 2004— has contributed to higher federal disaster costs. These impacts pose significant financial risks for the federal government, which owns extensive infrastructure, insures property through federal flood and crop insurance programs, provides technical assistance to state and local governments, and provides emergency aid in response to natural disasters. Without standard internal control guidance in place prior to future disasters, agencies may not be able to ensure that internal controls for disaster relief funding are effectively designed and timely implemented for all related funding. When disasters occur, the destruction caused by those disasters must be addressed immediately, and disaster relief funding must be delivered expeditiously. However, the risk for fraud and abuse increases when billions of dollars are being spent quickly. 
Our past work and that of the IG community has shown that effective controls and comprehensive accountability mechanisms for the use of resources related to a disaster are essential to ensure that resources are used appropriately. Relying on incremental disaster relief internal control plans cannot ensure that comprehensive information on risks and related internal controls will be adequate to ensure the safeguarding of disaster funds. Although M-13-07 represents an important step in the right direction, establishing more robust internal control guidance that can be applied to future disaster relief funding would allow agencies to proactively identify risks and develop internal controls prior to receiving such funding. Further, linking the additional risks identified in incremental plans to the complete set of known risks and related internal controls can help agency management and external entities, including Congress, to provide effective oversight of the funds. To proactively prepare for oversight of future disaster relief funding, we recommend that the Director of OMB develop standard guidance for federal agencies to use in designing internal control plans for disaster relief funding. Such guidance could leverage existing internal control review processes and should include, at a minimum, the following elements: robust criteria for identifying and documenting incremental risks and mitigating controls related to the funding and requirements for documenting the linkage between the incremental risks related to disaster funding and efforts to address known internal control risks. We requested comments on a draft of the report from the Director of the Office of Management and Budget or her designee. On November 14, 2013, staff from OMB’s Office of Federal Financial Management provided oral comments and stated that they generally agreed with our recommendation and requested additional information on the findings to inform future guidance. 
They also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, and the 19 agencies receiving funds under the Disaster Relief Act. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III.

The Disaster Relief Appropriations Act, 2013 (Disaster Relief Act), mandated GAO to review the design of the internal control plans prepared by federal agencies receiving funds under the Disaster Relief Act. This report addresses the extent to which (1) the internal control plans prepared by federal agencies complied with Office of Management and Budget (OMB) guidance and (2) OMB’s guidance was effective for providing comprehensive oversight of the internal control risks for the programs receiving funds for Sandy disaster relief. To determine the extent to which the internal control plans prepared by federal agencies complied with OMB guidance, we obtained the Sandy disaster relief internal control plans for the 19 federal agencies administering the 61 programs receiving funds under the Disaster Relief Act and compared them to OMB Memorandum M-13-07 (M-13-07). To determine the extent to which OMB’s guidance was effective for providing comprehensive oversight of the internal control risks for the programs receiving funds for Sandy disaster relief, we reviewed the internal control plans and M-13-07 against Standards for Internal Control in the Federal Government. We interviewed OMB staff and agency officials regarding the development and implementation of M-13-07. 
In addition, we compared the agencies’ identified incremental risks to prior GAO and inspector general (IG) findings associated with internal control risks for agency programs receiving funds for Sandy disaster relief. Specifically, we reviewed the following:
- GAO’s High-Risk Series: An Update; GAO reports and findings from 2010 to 2013 that focused on programs receiving funding under the Disaster Relief Act; and GAO work related to Hurricane Katrina or the American Recovery and Reinvestment Act of 2009;
- agencies’ IG reports from 2010 to 2013 that focus on programs receiving Disaster Relief Act funds;
- agencies’ fiscal year 2012 financial statement auditor’s reports, including reports on internal control over financial reporting and reported noncompliance with laws and regulations, fiscal year 2012 reported improper payments, and management’s statement of assurance related to 31 U.S.C. § 3512(c)-(d), commonly known as the Federal Managers’ Financial Integrity Act, and OMB Circular No. A-123; and
- agencies’ fiscal year 2012 annual reviews of programs and identification of those susceptible to significant improper payments.
In addition, we obtained information from agencies regarding the status of obligations of Sandy disaster relief funding and the impact of sequestration on these funds. We also obtained information from agency IGs regarding their ongoing or planned audit work related to Sandy disaster relief funding. We conducted this performance audit from March 2013 to November 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Table 13 presents the federal agencies and programs or appropriation accounts receiving funding under the Disaster Relief Act. Under the authority granted by the Balanced Budget and Emergency Deficit Control Act of 1985, as amended, OMB determined that these supplemental appropriations were to be included in the fiscal year 2013 base subject to sequestration under section 251A of that act. The amounts included in this report are drawn from the Disaster Relief Act as originally enacted and are not adjusted to account for sequestration. In addition to the contact named above, Michael Hansen (Assistant Director), Kim McGatlin (Assistant Director), Gloria Cano, Oliver Culley, Francine DelVecchio, Gabrielle Fagan, Patrick Frey, James Healy, Wilfred Holloway, Jason Kelly, Jason Kirwan, Felicia Lopez, Andrew Seehusen, Danietta Williams, and Matthew Zaun made key contributions to this report.
In late October 2012, Hurricane Sandy devastated portions of the Mid-Atlantic and northeastern United States, leaving victims of the storm and their communities in need of financial assistance for disaster relief aid. On January 29, 2013, the President signed the Disaster Relief Appropriations Act, 2013, which provided approximately $50 billion in supplemental appropriations, before sequestration, to 61 programs at 19 federal agencies for expenses related to the consequences of Hurricane Sandy. The act required agencies to submit internal control plans for the funds in accordance with OMB criteria by March 31, 2013. The act mandated GAO to review the design of agencies' internal control plans. This report addresses the extent to which (1) the internal control plans prepared by federal agencies complied with OMB guidance and (2) OMB's guidance was effective for providing comprehensive oversight of the internal control risks for the programs receiving funds for Sandy disaster relief. To address these objectives, GAO reviewed agencies' Sandy disaster relief internal control plans; M-13-07; and relevant GAO, inspector general, and financial statement audit reports. GAO also reviewed the internal control plans and M-13-07 against internal control standards. In response to the Disaster Relief Appropriations Act, 2013, agencies prepared Hurricane Sandy disaster relief internal control plans based on Office of Management and Budget (OMB) guidance but did not consistently apply the guidance in preparing these plans. OMB Memorandum M-13-07 (M-13-07), Accountability for Funds Provided by the Disaster Relief Appropriations Act, directed federal agencies to provide a description of incremental risks they identified for Sandy disaster relief funding as well as an internal control strategy for mitigating these risks. 
Each of the 19 agencies responsible for the 61 programs receiving funds under the act submitted an internal control plan with specific program details using a template provided by OMB. Agencies' plans ranged from providing most of the required information to not providing any information on certain programs. For example, each of the 61 programs was required to discuss its protocol for improper payments; however, GAO found that 38 programs included this information, 11 included partial information, and 12 included no information. OMB's guidance was an important step in the oversight of Sandy disaster funding, addressing internal controls, improper payments protocol, and unexpended grant funds. However, several weaknesses limited its effectiveness in providing a comprehensive oversight mechanism for these funds. Specifically, the guidance (1) focused on the identification of incremental risks without adequate linkages to demonstrate that known risks had been adequately addressed, (2) provided agencies with significant flexibility without requirements for documentation or criteria for claiming exceptions, and (3) resulted in certain agencies developing their internal control plans at the same time that funds needed to be quickly distributed. GAO found that OMB guidance:
- Asked agencies to focus on mitigating incremental risk, so the resulting plans did not provide comprehensive information on all known risks and internal controls that may affect the programs that received funding. Linking the additional risks identified in the plans to the complete set of known risks and related internal controls can help agency management and Congress to provide effective oversight of the funds.
- Allowed agencies significant flexibility in deciding whether they needed to design additional internal controls, and did not provide specific criteria for agencies to claim exemptions from requirements. GAO found that some agencies did not discuss certain additional internal controls in their plans, despite having identified incremental risks.
- Did not require agencies to document their rationales for not including additional internal controls in their plans. As a result, it was not apparent from the internal control plans the extent to which the agencies considered the need for these additional internal controls.
- Was developed and issued in a short time frame in response to the act. By the time that the agencies submitted their internal control plans on March 31, 2013, they reported that they had already obligated approximately $4.6 billion. Standard internal control guidance for disaster funding could help ensure that controls are designed in a timely manner.

GAO recommends that OMB develop more robust guidance for agencies to design internal control plans for future disaster relief funding. OMB staff generally agreed with GAO's recommendation.
Foster care begins when a child is removed from his or her parents or guardians and placed under the responsibility of a state child welfare agency. Removal from the home can occur because of physical abuse or neglect. It can also occur when a child’s own behavior or condition is beyond the control of his or her family or poses a threat to his or her community. Foster care may be provided by a family member, caregivers previously unknown to the child, or a group home or institution. Ideally, foster care is an intermediate step toward a permanent family home. When reuniting the child with his or her parents or guardian is not in the child’s best interest, caseworkers seek a new permanent home for the child, such as an adoptive home or guardianship. However, some children remain in foster care until they reach adulthood. As we have previously reported, children in foster care exhibit more numerous and serious medical conditions, including mental health conditions, than do other children. States are responsible for administering their Medicaid and foster care programs; the programs are overseen at the federal level by HHS through CMS and ACF, respectively. HHS may issue regulations, provide guidance on some issues, or simply provide informational resources for states to consider for their programs, the latter being the case for psychotropic drugs provided to children in state custody. Among these resources are best principles developed by AACAP, a nonprofit professional association. While HHS does not require states to follow these guidelines, AACAP developed them as a model to help inform state monitoring programs for youth in state custody. AACAP guidelines point out that, “as a result of several highly publicized cases of questionable inappropriate prescribing, treating youth in state custody with psychopharmacological agents has come under increasingly intense scrutiny,” leading to state implementation of consent, authorization, and monitoring procedures. 
More recently, Congress passed the Child and Family Services Improvement and Innovation Act in September 2011, requiring states that apply for certain federal child welfare grants to establish protocols for the appropriate use and monitoring of psychotropic drugs prescribed to foster children. The use of psychotropic drugs has been shown to effectively treat mental disorders, such as attention deficit hyperactivity disorder (ADHD), bipolar disorder, depression, and schizophrenia. While many psychotropic drugs that have been approved by the FDA as safe and effective in adults have not been similarly approved for children of all ages, prescribing them to children is legal and common medical practice in many instances. According to the National Institute of Mental Health (NIMH), some children with severe mental health conditions would suffer serious consequences without such medication. However, psychotropic drugs can also have serious side effects in adults, including irreversible movement disorders, seizures, and an increased risk for diabetes over the long term. Further, the additional risks these drugs pose specifically to children are not well understood. Psychotropic drugs affect brain activity associated with mental processes and behavior. These drugs are also called “psychotherapeutic” drugs. While psychotropic drugs can have significant benefits for those with mental illnesses, they can also have side effects ranging from mild to serious. Table 1 highlights the psychotropic drug classes studied in this report and provides examples of drugs within those classes, as well as conditions treated and possible side effects. Foster children in each of the five selected states were prescribed psychotropic drugs at higher rates than were nonfoster children in Medicaid during 2008. These states spent over $375 million for prescriptions provided through fee-for-service programs to foster and nonfoster children. 
The higher rates do not necessarily indicate inappropriate prescribing practices, as they could be due to foster children’s greater exposure to traumatic experiences and the unique challenges of coordinating their medical care. However, psychotropic drug claims for foster children were also more likely to show the indicators of potential health risks that we established with our experts. According to our experts, no evidence supports the concomitant use of five or more psychotropic drugs in adults or children, yet hundreds of both foster and nonfoster children were prescribed such a medication regimen. Similarly, thousands of foster and nonfoster children were prescribed doses exceeding maximum levels cited in guidelines based on FDA-approved drug labels, which our experts said increases the potential for adverse side effects, and does not typically increase the efficacy of the drugs to any significant extent. Further, foster and nonfoster children under 1 year old were prescribed psychotropic drugs, which our experts said have no established use for mental health conditions in infants and could result in serious adverse effects. The kinds of drugs included in prescription data reported to CMS in 2008 varied by state. Because the claims data we obtained from CMS contained fewer types of medications for Michigan and Oregon, we may understate the rates of psychotropic prescriptions for both foster and nonfoster children in those states. While rates of psychotropic prescriptions are not comparable across states, they are comparable between foster and nonfoster children within the same state. Similarly, the ratio of prescriptions to foster children to prescriptions to nonfoster children is comparable across states. 
Comparing the selected states’ monitoring programs for psychotropic drugs provided to foster children with AACAP’s guidelines indicates that, as of October 2011, each of the state programs falls short of providing comprehensive oversight as defined by AACAP. Though states are not required to follow these guidelines, the six states we examined had developed monitoring programs that satisfied some of AACAP’s best principles guidelines to varying degrees. Such variation is not surprising given that states set their own oversight guidelines and have only recently been required, as a condition of receiving certain federal child welfare grants, to establish protocols for the appropriate use and monitoring of psychotropic drugs prescribed to foster children. HHS has provided limited guidance to the states on how to improve their control measures to monitor psychotropic drug prescriptions to foster children. Without formally endorsing specific oversight measures for states to implement, HHS conducts state reviews and provides other online resources, including the AACAP guidelines, to help states improve their programs. ACF performs Child and Family Services Reviews (CFSR) of states to ensure conformity with federal child welfare requirements—which include provisions for safety, permanency, and family and child well-being—and to assist states as they enhance their capacity to help families achieve positive outcomes. These reviews include the examination of a limited number of children’s case files, in part to determine whether the state foster care agency conducted assessments of children’s mental health needs and provided appropriate services to address those needs. However, these reviews are not designed to identify specific potential health risk indicators related to psychotropic medications, and since they occur every 2 to 5 years, states cannot rely on these reviews to actively monitor prescriptions. 
In addition, ACF operates technical assistance centers and provides online resources such as links to state guidance on psychotropic drug oversight, academic studies on psychotropic drugs, and recordings of teleconferences related to the oversight of psychotropic drugs. While HHS makes a variety of resources available to states developing oversight programs for psychotropic drugs, it has not endorsed any specific guidance. In the absence of HHS-endorsed guidance, states have developed varied oversight programs that in some cases fall short of AACAP’s recommended guidelines. The AACAP guidelines are arranged into four categories (consent, oversight, consultation, and information sharing) that contain practices defined as minimal, recommended, or ideal. The following describes the extent to which the selected states’ monitoring programs cover these areas.

Consent: According to interviews and documentation provided by state Medicaid and foster care officials, all six selected states have implemented some practices consistent with AACAP guidelines for consent procedures, though in varying scope and application. According to AACAP, the consent process should be documented and monitored to ensure that caregivers are aware of relevant information, such as the child’s diagnosis, expected benefits and risks of treatments, common side effects, and potentially severe adverse events. Thus, states that do not incorporate consent procedures similar to AACAP’s guidelines may increase the likelihood that caregivers are not fully aware of the risks and benefits associated with the decision to medicate with psychotropic drugs, and may limit the caregiver’s ability to accurately assess and monitor the foster child’s reaction to the drugs. Table 4 lists AACAP’s guidelines relative to consent and illustrates the extent to which states have implemented those guidelines. 
Florida and Michigan provide examples of how states vary in their approach to monitoring consent procedures used for psychotropic drugs prescribed to foster children. For example, Florida requires all prescribers to obtain a standardized written consent form from the parent or legal guardian, or a court order, before a psychotropic drug is administered. The consent form includes the diagnosis, dosage, target symptoms, drug risks and benefits, drug monitoring plan, alternative treatment options, and discussions about the treatment between the child and the parent or legal guardian. Florida law identifies who is authorized to give consent, and the state obtains assent for psychotropic drug management from minors when age- and developmentally appropriate. Florida provides required training to caseworkers, but the training does not include the names and indications for use of commonly prescribed psychotropic drugs. In contrast, Michigan has policies identifying who is authorized to give consent for foster children, but does not use a standardized consent form that can help inform consent decisions. Instead, Michigan requires that caseworkers maintain in their files the consent forms used by individual prescribers, which likely vary in content and may thus vary in helpfulness to consent givers. Moreover, Michigan does not have training requirements in place to help caseworkers, court personnel, and foster parents become more effective advocates for children in their custody. Training for caseworkers is optional, but according to an agency official, the training was unavailable because no trainer had been hired as of September 2011. Michigan does not have policies for obtaining assent from minors when possible, thus meeting only one of AACAP’s guidelines for consent procedures. 
Oversight procedures: Each of the six states has developed some procedures similar to AACAP’s guidelines for overseeing psychotropic drug prescriptions for foster children, as evidenced by interviews and documentation provided by state Medicaid and foster care officials. According to one study, states that implement standards to improve oversight of the use of psychotropic drugs may create enhanced continuity of care, increased placement stability, reduced need for psychiatric hospitalization, and decreased incidence of adverse drug reactions. As such, states that do not incorporate oversight procedures similar to AACAP’s recommendations limit their ability to identify the extent to which potentially risky prescribing is occurring in the foster care population. Table 5 lists AACAP’s guidelines relative to oversight and illustrates the extent to which selected states have implemented those guidelines. Texas and Maryland provide examples of how states vary in their approach to oversight of psychotropic drug use among foster children. For example, the Texas Department of Family and Protective Services (DFPS) and the University of Texas at Austin College of Pharmacy assembled an advisory committee that included child and adolescent psychiatrists, psychologists, pediatricians, and other mental health professionals to develop psychotropic drug use parameters for foster children. These parameters are used to help identify cases requiring additional review. Factors that trigger additional reviews include dosages exceeding usual recommended levels, prescriptions for children of very young age, concomitant use of five or more psychotropic drugs, and prescriptions by a primary care provider lacking specialized training. According to the Texas foster care agency’s data analysis, after Texas released these guidelines in 2005, psychotropic drug use among Texas foster care children declined from almost 30 percent in fiscal year 2004 to less than 21 percent in fiscal year 2010. 
Texas also analyzes Medicaid claims data to monitor psychotropic drug prescriptions for foster children and to identify any unusual prescribing behaviors. Texas provides quarterly reports to child welfare officials on the use of psychotropic drugs among foster children, and treating clinicians have access to a child’s medical records on a 24-hour basis. However, the electronic health record system does not always capture the child’s height, weight, and allergies, because entering this information is optional for prescribers. This information is helpful because a child’s weight may be used to determine the recommended dose for some medications, while allergy information may be used to determine whether a child should take a particular medication. In addition, ongoing medical problems are not recorded in the electronic health record system, and Texas does not measure the rate of adverse reactions at the macro level among youth in state custody.

Maryland fully applies only one of the six AACAP guidelines for oversight procedures and partially applies others. Maryland provides foster children in out-of-home placement with a “medical passport” that serves as a record of the child’s previous and current medical file. Each topic included in AACAP’s guidelines for maintaining ongoing medical records, including diagnoses, allergies, and medical history, is documented in the passport, and an additional copy of the passport is kept in the child’s case record and maintained electronically. However, Maryland has not produced any specific guidelines for the use of all psychotropic prescriptions among foster children, thus limiting the state’s ability to identify potentially risky prescribing practices for the foster child population. Without guidelines for psychotropic drugs, there are no criteria to help agency officials monitor the appropriateness of prescriptions. 
Moreover, Maryland does not review Medicaid claims data statewide specifically for foster children, and therefore does not produce quarterly reports to identify the rate and types of drugs used in the foster care population that could help identify and monitor prescribing trends. In addition, as stated earlier, Maryland’s 2008 foster care data were found unreliable. Maryland officials told us that transitioning to a new records system in 2007 resulted in incorrect and missing data for foster children.

Consultation program: According to interviews and documentation provided by state Medicaid and foster care officials, five of the six states have implemented some of AACAP’s guidelines for consultation, but only one of the six selected states has implemented a consultation program that ensures all consent givers and prescribers are able to seek advice from child and adolescent psychiatrists before making consent decisions for foster children. States that do not have a consultation program to help link consent givers and prescribers with child and adolescent psychiatrists may reduce the extent to which prescribers and consent givers are informed about the expected benefits and risks of treatments, alternative treatments, and the risks associated with no treatment. Table 6 lists the AACAP guidelines relative to consultation programs and illustrates the extent to which selected states have implemented those guidelines. Massachusetts and Oregon provide examples of how states vary in their approach to providing expert consultations to caregivers. For example, Massachusetts’s foster care agency started an initiative to connect child welfare staff to Medicaid pharmacists who can provide information on medications and the foster child’s drug history, including interactions between any current and proposed drugs. 
In addition, primary care physicians who treat children, including foster care children, have access to the state-funded Massachusetts Child Psychiatry Access Project, a system of regional children’s mental health consultation teams designed to help pediatricians find and consult with child psychiatrists. Massachusetts has six child psychiatrists who are available to provide consultations on a part-time basis to child welfare staff, but these consultations are not available for other consent givers such as foster parents. The foster care agency’s consultation program also provides face-to-face evaluations of foster children at the request of consent givers concerned about a child’s treatment. In early 2009, Oregon put a consultation program in place to help consent givers make informed decisions. In 2010, Oregon’s foster care agency shifted the responsibility for all consent decisions where the agency has legal custody or is the legal guardian of the child from foster parents to child welfare agency officials, who now have access to a child and adolescent psychiatrist and can seek consultations before making consent decisions. However, the consultation program does not conduct face-to-face evaluations of children—by a child and adolescent psychiatrist—at the request of consent givers, nor does it enable prescribing physicians to consult with child and adolescent psychiatrists. Oregon has plans for the development of the Oregon Psychiatric Access Line for Kids, which would allow primary care physicians and nurse practitioners to consult with child psychiatrists, but agency officials told us the program is not operational due to a lack of funding.

Information sharing: Four of the six selected states have created websites with information on psychotropic drugs for clinicians, foster parents, and other caregivers. 
Access to comprehensive information can help ensure that clinicians, foster parents, and other interested parties are fully informed about the use and management of psychotropic drugs. Table 7 lists AACAP’s guidelines relative to information sharing and illustrates the extent to which selected states have implemented those guidelines. For example, Florida’s foster care agency has partnered with the University of South Florida to implement Florida’s Center for the Advancement of Child Welfare Practice to provide needed information and support to Florida’s professional child welfare stakeholders. The program’s website is consistent with four of AACAP’s six guidelines for information sharing. For example, the website includes policies and procedures governing psychotropic drug management, staff publications and educational materials about psychotropic drugs, consent forms, and links to other informative publications and news stories related to foster children and psychotropic drugs. However, the website does not provide reports on prescription patterns for psychotropic drugs or adverse effect rating forms. In comparison, Oregon’s foster care agency developed a website that includes information regarding psychotropic medication, but the website is not updated regularly to operate as an ongoing information resource. The website currently has information on state policies and procedures governing the use of psychotropic drugs and also contains web links to consent forms and a medication chart that can be used as a psychotropic medication reference tool. However, the website does not meet three of the six information-sharing guidelines, including those on posting adverse effect rating forms, reporting prescription patterns, and providing links to other informative websites. 
States that provide less access to comprehensive information may limit the extent to which physicians, foster parents, and other interested parties are informed about the use and management of psychotropic drugs.

The higher rates of psychotropic drug prescriptions among foster children may be explained by their greater mental health needs and the challenges inherent to the foster care system. However, thousands of foster and nonfoster children in the five states we analyzed were found to have prescriptions that carry potential health risks. While doctors are permitted to prescribe these drugs under current laws, increasing the number of drugs used concurrently and exceeding the maximum recommended dosages for certain psychotropic drugs have been shown to increase the risk of adverse side effects in adults. Prescriptions for infants are also of concern, due to the potential for serious adverse effects even when these drugs are used for non-mental health purposes. Comprehensive oversight programs would help states identify these and other potential health risks and provide caregivers and prescribers with the information necessary to weigh drug risks and benefits. The recently enacted Child and Family Services Improvement and Innovation Act requires states to establish protocols for monitoring psychotropic drugs prescribed to foster children. Under the act, each state is authorized to develop its own monitoring protocols, but HHS-endorsed, nationwide guidelines for consent, oversight, consultation, and information sharing could help states close the oversight gaps we identified and increase protections for this vulnerable population. 
In our draft report, we recommended that the Secretary of HHS evaluate our findings and consider endorsing guidance to state Medicaid and child welfare agencies on best practices for monitoring psychotropic drug prescriptions for foster children, including guidance that addresses, at minimum, informed consent, oversight, consultation, and information sharing. We have received written comments on our draft report from HHS and relevant agencies in six states. In written comments, HHS agreed with our recommendation and provided technical comments, which we incorporated as appropriate. In written comments and exit conferences, staff from state Medicaid and foster care agencies provided comments on key facts from the report. Agency comments will be incorporated and addressed in a written report that will be issued in December 2011.

Chairman Carper, Ranking Member Brown, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For additional information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.

Appendix I: Print-friendly version of figure 1 and figure 2

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Foster children have often been removed from abusive or neglectful homes and tend to have more mental health conditions than other children. Treatment may include psychotropic drugs, but their risks to children are not well understood. Medicaid, administered by states and overseen by the Department of Health and Human Services (HHS), provides prescription drug coverage to foster children. This testimony examines (1) rates of psychotropic prescriptions for foster and nonfoster children in 2008 and (2) state oversight of psychotropic prescriptions for foster children through October 2011. GAO selected Florida, Maryland, Massachusetts, Michigan, Oregon, and Texas primarily based on their geographic diversity and size of the foster care population. Results cannot be generalized to other states. In addition, GAO analyzed Medicaid fee-for-service and foster care data from selected states for 2008, the most recent year of prescription data available at the start of the audit. Maryland's 2008 foster care data were unreliable. GAO also used expert child psychiatrists to provide a clinical perspective on its methodology and analysis, reviewed regulations and state policies, and interviewed federal and state officials. Foster children in the five states GAO analyzed were prescribed psychotropic drugs at higher rates than nonfoster children in Medicaid during 2008, which according to research, experts consulted, and certain federal and state officials, could be due in part to foster children's greater mental health needs, greater exposure to traumatic experiences, and the challenges of coordinating their medical care. However, prescriptions to foster children in these states were also more likely to have indicators of potential health risks. According to GAO's experts, no evidence supports the concomitant use of five or more psychotropic drugs in adults or children, yet hundreds of both foster and nonfoster children in the five states had such a drug regimen. 
Similarly, thousands of foster and nonfoster children were prescribed doses higher than the maximum levels cited in guidelines developed by Texas based on FDA-approved labels, which GAO's experts said increases the risk of adverse side effects and does not typically increase the efficacy of the drugs to any significant extent. Further, foster and nonfoster children under 1 year old were prescribed psychotropic drugs, which GAO's experts said have no established use for mental health conditions in infants; providing them these drugs could result in serious adverse effects. Selected states' monitoring programs for psychotropic drugs provided to foster children fall short of best principles guidelines published by the American Academy of Child and Adolescent Psychiatry (AACAP). The guidelines, which states are not required to follow, cover four categories. (1) Consent: Each state has some practices consistent with AACAP consent guidelines, such as identifying caregivers empowered to give consent. (2) Oversight: Each state has procedures consistent with some but not all oversight guidelines, which include monitoring rates of prescriptions. (3) Consultation: Five states have implemented some but not all guidelines, which include providing consultations by child psychiatrists by request. (4) Information: Four states have created websites about psychotropic drugs for clinicians, foster parents, and other caregivers. This variation is expected because states set their own guidelines. HHS has not endorsed specific measures for state oversight of psychotropic prescriptions for foster children. HHS-endorsed guidance could help close gaps in oversight of psychotropic prescriptions and increase protections for these vulnerable children. In our draft report, GAO recommended that HHS consider endorsing guidance for states on best practices for overseeing psychotropic prescriptions for foster children. HHS agreed with our recommendation. 
Agency comments will be incorporated and addressed in a written report that will be issued in December 2011.
In February 2011, Boeing won the competition to develop the Air Force’s next generation aerial refueling tanker aircraft, the KC-46. To develop a tanker, Boeing modified a 767 aircraft in two phases. In the first phase, Boeing modified the design of the 767 with a cargo door and an advanced flight deck display borrowed from its 787 aircraft and calls this modified version the 767-2C. The 767-2C is built on Boeing’s existing production line. In the second phase, the 767-2C is militarized and brought to the KC-46 configuration. The KC-46 will allow for two types of refueling to be employed in the same mission—a refueling boom that is integrated with a computer-assisted control system and a permanent hose and drogue refueling system. The boom is a rigid, telescoping tube that an operator on the tanker aircraft extends and inserts into a receptacle on the aircraft being refueled. See figure 1 for an example of boom refueling. The “hose and drogue” system consists of a long, flexible refueling hose and a parachute-like metal basket that provides stability. Drogue refueling is available via the centerline drogue system in the middle of the aircraft, or via a wing air refueling pod (WARP) located on each wing. WARPs are used for simultaneous refueling of two aircraft. See figure 2 for a depiction of the conversion of the 767 aircraft into the KC-46 tanker with the boom deployed. The Federal Aviation Administration has previously certified Boeing’s 767 commercial passenger airplane (an approval referred to as a type certificate) and is to certify the designs for both the 767-2C and the KC-46 with Amended and Supplemental type certificates, respectively. The Air Force is then responsible for certifying the airworthiness of the KC-46. The Air Force is also to verify that the KC-46 systems meet contractual requirements and that the KC-46 and various receiver aircraft are certified for refueling operations. 
Boeing was awarded a fixed price incentive (firm target) contract for development. The contract is designed to hold Boeing accountable for costs associated with the design, manufacture, and delivery of four test aircraft and includes options to manufacture the remaining 175 aircraft. A fixed price incentive development contract was awarded for the program because KC-46 development is considered to be a relatively low-risk effort to integrate mostly mature military technologies onto an aircraft designed for commercial use. The contract limits the government’s financial liability and provides the contractor incentives to reduce costs in order to earn more profit. It also specifies that Boeing must correct any deficiencies and bring development and production aircraft to the final configuration at no additional cost to the government. The contract includes firm fixed price contract options for the first 2 production lots, and options with not-to-exceed fixed prices for production lots 3 through 13. The Air Force has already exercised options for the first 3 production lots, totaling 34 aircraft, and negotiated firm fixed prices for production lot 3. The original development contract requires Boeing to deliver 18 operational aircraft, 9 WARP sets, and 2 spare engines by August 2017. The contract refers to this as required assets available, while we refer to it as fully capable aircraft in this report. In addition, according to the contract, all required training must be complete, and the required support equipment and sustainment support must be in place by August 2017. Barring any changes to KC-46 requirements by the Air Force, the development contract specifies a ceiling price of $4.9 billion for Boeing to develop the first 4 aircraft, at which point Boeing must assume responsibility for all additional costs. 
Due to several development-related problems experienced over the last 2 years, Boeing currently estimates that development costs will total about $5.9 billion, or about $1 billion over the ceiling price. The government is not responsible for the additional cost. The KC-46 program is meeting total acquisition cost and performance targets, but has experienced some recent schedule delays. The government’s cost estimate has declined for a fourth consecutive year and is now about $7.3 billion less than the original estimate. In addition, the aircraft is projected to meet all performance capabilities. However, Boeing experienced some problems developing the aircraft. As a result, it now expects to deliver the first 18 fully capable aircraft in October 2018 instead of August 2017, 14 months later than expected. The Air Force is continuing to work within its total program acquisition cost estimate for the KC-46, which includes development, procurement, and military construction costs. The total program acquisition cost now stands at $44.4 billion. This is about $7.3 billion, or about 14 percent, less than the original estimate of $51.7 billion. Average program acquisition unit costs have decreased by the same percentage because quantities have remained the same. Table 1 provides a comparison of the initial and current quantity and cost estimates. The Air Force has been able to decrease its cost estimate over the past 4 years primarily because it has not added or changed requirements and therefore there were fewer engineering changes than expected. According to program officials, the Air Force’s initial cost estimate included a large amount of risk funding for possible requirements changes, based on its experience with prior major acquisition programs. Military construction costs have also come in below estimates. The program estimates that the KC-46 will achieve its performance capabilities. 
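The reported cost decrease follows from simple arithmetic on the two estimates cited above; a minimal sketch (values taken from this report, rounding assumed):

```python
# Reported KC-46 total acquisition cost estimates, in billions of then-year dollars.
original_estimate = 51.7
current_estimate = 44.4

decrease = original_estimate - current_estimate
percent_decrease = decrease / original_estimate * 100

print(round(decrease, 1))       # about 7.3 (billion dollars)
print(round(percent_decrease))  # about 14 (percent)
```

Because quantities are unchanged at 179 aircraft, the average program acquisition unit cost falls by the same percentage.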
This includes 9 key performance parameters and 5 key system attributes that are critical to the aircraft’s military capability and 7 technical performance capabilities that track progress to meeting contract specifications. For example, the aircraft is expected to be ready for operational use when required at least 89 percent of the time and, once it is deployed for an aerial refueling mission, be able to complete that mission 92 percent of the time. Appendix I provides a description of each of the key performance parameters and system attributes as well as the status of technical performance capabilities. The program has collected actual test data that validates a few of the performance capabilities. For example, the aircraft is using less than 1,557 gallons of fuel per flight hour, its fuel usage rate target. In addition, the program also closely tracks the actual weight of the aircraft because weight has a direct effect on the amount of fuel that can be carried. As of January 2017, the program had approximately 595 pounds of margin to the operational empty weight target of 204,000 pounds. The program also tracks a reliability growth metric—the mean time between unscheduled maintenance events due to equipment failure—and set a reliability goal of 2.83 flight hours between these events by the time the aircraft reaches 50,000 flight hours. According to program officials, as of September 2016, the program had completed about 1,300 flight hours and was achieving 1.56 hours compared to its goal of 1.72 hours by that time. Program officials believe that the reliability will improve as additional flight hours are completed and as unreliable parts are identified and replaced. Program officials also report that the program does not yet have actual flight test data to validate many of the other key and technical performance capabilities, such as those for operational availability and mission capability mentioned above. 
In lieu of flight test data, the program assesses the measures on a monthly basis, relying on other information such as data from ground testing, models and simulations, and prior tanker programs. Test officials eventually expect to collect and analyze this data through flight testing. In some cases the program will be tracking progress towards achieving some performance capabilities while the aircraft is in operation. For example, in addition to the reliability growth metric mentioned above, Boeing is expected to demonstrate that mechanical problems on the aircraft can be fixed within 12 hours at least 71 percent of the time once the aircraft has accumulated 50,000 flight hours. Since our last report in April 2016, the Under Secretary for Acquisition, Technology and Logistics approved the KC-46 program to enter low-rate initial production in August 2016, one year later than originally planned. In addition, the Air Force has exercised contract options for the first 3 low-rate production lots of aircraft. We previously reported that the delay to the low-rate initial production decision was the result of problems Boeing had wiring the aircraft, design issues discovered with the fuel system components, and a fuel contamination event that corroded the fuel tanks of one of the development aircraft. Those problems have been overcome, but time was lost working through them. Until the low-rate initial production decision, the program had met its major milestones. Boeing and KC-46 program officials modified the program schedule in January 2017 to reflect the work remaining, including obtaining Federal Aviation Administration confirmation that the aircraft’s parts all match their design drawings. While the Federal Aviation Administration has approved the design of many aircraft components, it is expected that the WARPs will be the last subsystem to receive design approval for all of their parts and to demonstrate that the parts conform to the designs. 
According to Boeing officials, the company and its WARP supplier had underestimated the level of design drawing details the Federal Aviation Administration needed to review to determine that the parts conformed to the approved design. According to these officials, the WARP supplier has been negotiating with its various sub-tier suppliers over the past 3 years for the necessary design documentation. Program officials estimate that the WARP design will be approved by the Federal Aviation Administration in July 2017, which will then allow Boeing to complete remaining developmental flight tests and meet other key milestones. Program officials do not consider the WARP design to be a significant program risk because the WARPs performed well in flight testing leading up to the low-rate initial production decision. Changes to key milestones are shown in table 2. Overall, the current schedule reflects a 14-month delay in Boeing delivering the first 18 aircraft with 9 WARP sets under the terms of the development contract, referred to as 18 fully capable aircraft in table 2. Instead of meeting an August 2017 date, the program office now estimates that Boeing will deliver the first 18 aircraft by February 2018 and the 9 WARP sets separately by October 2018. Air Force officials are negotiating for considerations from Boeing to account for lost military tanker capability associated with the delivery delays. According to program officials, the lost capability includes lost benefits—such as the Air Force not being able to grow the overall U.S. tanker fleet to 479 aircraft until later—and additional costs—such as the government having to maintain and sustain legacy aircraft and its test infrastructure longer than originally planned. The planned delivery of the first 18 aircraft, though 6 months late, will provide boom and drogue refueling capability to the warfighter. 
When delivered, the WARPs will enable the refueling of two receiver aircraft simultaneously, a capability that is not used as frequently, according to Air Force officials. Air Force officials said the current schedule and considerations will be part of a contract modification that is expected to be finalized in summer 2017. Figure 3 provides a closer look at the original and current delivery schedules. As shown, under the current schedule Boeing plans to deliver aircraft over a compressed 6-month period of time compared to its original plan to deliver aircraft over a 14-month period of time. This delivery period assumes Boeing will deliver 3 aircraft per month, a greater pace than planned during full rate production. According to program officials, Boeing is already in the process of manufacturing 18 aircraft from the first 3 low-rate production lots; 12 of these aircraft are over 70 percent complete. The current schedule also takes into account the decision by the Under Secretary for Acquisition, Technology and Logistics to designate production lots 3 and 4 (of 15 aircraft each) as low-rate instead of full-rate lots. This was done to help Boeing avoid a break in production while it completes developmental and operational testing. The program expects to begin delivering these aircraft in 2018 and 2019, respectively. As a result, as shown in figure 4, concurrency between developmental flight testing and production has increased. The Air Force will have contracted for 49 aircraft before developmental flight testing is completed, representing 27 percent of the total aircraft, compared to the original plan of 19 aircraft, or about 11 percent. Further, the first 18 aircraft without WARPs will be delivered before most of operational testing has been completed. There is risk that Boeing may identify problems during flight testing that will lead to design changes. 
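The concurrency percentages cited above can be checked directly against the planned fleet of 179 aircraft; a small sketch using the figures in this report:

```python
# Aircraft contracted before developmental flight testing completes,
# compared against the planned total fleet of 179 aircraft.
total_aircraft = 179
current_plan = 49    # contracted under the revised schedule
original_plan = 19   # contracted under the original plan

print(round(current_plan / total_aircraft * 100))   # about 27 (percent)
print(round(original_plan / total_aircraft * 100))  # about 11 (percent)
```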
However, according to the terms of the development contract, the cost to fix these discoveries will be borne by Boeing, as it is required to bring all aircraft to the final configuration after completion of testing. Boeing faces two primary challenges in meeting the current delivery schedule, both of which relate to its developmental test schedule. Our analysis indicates that testing may take longer than the program is estimating. If test points are not completed at the planned rate, then aircraft deliveries will be delayed, indicating that the new delivery schedule is optimistic. Electromagnetic Effects Testing Schedule: First, there is risk that Boeing will not be able to complete required electromagnetic effects testing on the KC-46 in May 2017, as currently planned. Boeing officials stated this is because the WARP supplier has not yet provided all detailed design drawings to the Federal Aviation Administration for approval. While Boeing had planned on delivery of an approved WARP by March 2017, it now expects that to occur in late July 2017. The original plan, according to agency officials, was to have all aircraft parts, including the WARPs, conform to design drawings and gain Federal Aviation Administration approval prior to this testing. During the testing, the KC-46’s electrical systems will be examined to verify that they do not create any electromagnetic interference, a process that requires a unique government facility that is also in high demand by other programs. Consequently, Boeing officials report that if the KC-46 is not ready for its scheduled time, these critical tests could potentially be delayed until the facility is available. The program is working on ways to mitigate the potential for delays in the delivery of the first 18 aircraft. For example, program officials stated that they are considering separate electromagnetic testing on the aircraft and the WARPs. 
Flight Test Completion Rate: Second, Boeing is projecting that it can complete test points over the remaining developmental flight test schedule at a rate higher than it has been able to demonstrate consistently. If test points are not completed at the planned rate, then aircraft deliveries will be delayed. The developmental flight test program contains about 29,000 total test points to be completed over a 32-month period. Government test officials report that these test points are a combination of Boeing-specific tests that it is conducting to reduce the risk of test failure and government-specific tests to verify the KC-46’s performance. Boeing has completed 53 percent of planned testing since the KC-46 developmental flight test program began in January 2015. The company would need to complete an average of 1,713 test points per month to complete remaining testing on time so that it can begin delivering aircraft in September 2017. As shown in figure 5, Boeing has only completed this number of test points once, in October 2016, when it completed 2,240 test points, which program officials reported was part of a planned test surge. Boeing test data shows that from March 2016 to January 2017, it completed an average rate of 811 test points per month. As shown in figure 6, at that rate, we project that Boeing would finish the remaining 13,706 test points in early June 2018, 9 months later than the planned completion date. The Director for Operational Test and Evaluation has previously assessed and continues to assess the KC-46 schedule as aggressive and unlikely to be executed as planned, stating that execution of the current schedule assumes historically unrealistic test aircraft flight rates. Boeing’s test schedule is based on flying 65 flight test hours on 767-2C aircraft per month and 50 hours on KC-46 aircraft per month. The program has actually averaged—across all aircraft in the development test program— about 25 hours per aircraft per month. 
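GAO's projection in the paragraph above can be approximated from the reported figures; a rough sketch, treating months as uniform (an assumption for illustration):

```python
# Remaining developmental test points as of January 2017, with the rate
# required to finish by September 2017 versus the average rate Boeing
# actually demonstrated from March 2016 to January 2017.
remaining_points = 13_706
months_to_plan_finish = 8    # February through September 2017
demonstrated_rate = 811      # average test points completed per month

required_rate = remaining_points / months_to_plan_finish
months_at_demonstrated_rate = remaining_points / demonstrated_rate

print(round(required_rate))               # about 1,713 points per month
print(round(months_at_demonstrated_rate)) # about 17 months, i.e., roughly mid-2018
```

At the demonstrated rate, testing runs roughly 9 months past the planned September 2017 completion, consistent with the June 2018 projection above.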
A government test official stated that similar programs in the past have sustained a pace of about 30 hours a month per aircraft. Government test officials noted that a large portion of testing completed so far was for Boeing-specific test points that could include tests that were cancelled if Boeing believed it had sufficient data already, and more time will likely be needed to plan and coordinate upcoming government-required testing. Boeing test officials believe the company can complete developmental testing by September 2017 because it plans to increase the number of test points completed per month by adding flight hours on nights and weekends. Boeing officials also believe the test pace will gain greater efficiency as the aircraft’s design and test plans stabilize. The program was working on a “test once” approach with Boeing, the Federal Aviation Administration, and DOD whereby common test activities required by multiple entities would only be performed once. According to program officials, Boeing is moving away from the test once approach and towards sequential testing as a mitigation strategy. They report that Boeing expects this will help it perform key tests more quickly because it will not need to wait for several systems to be approved for testing. Program officials, however, believe that the transition to a new testing approach will require weeks of test plan rewriting, and that obtaining approval for the design of all parts, including the WARPs, from the Federal Aviation Administration will continue to pose risk to test completion as currently planned. We are not making recommendations in this report. We provided a draft of this report to DOD for comment. DOD did not provide any written comments, but the KC-46 program office provided technical comments, which we incorporated as appropriate. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Air Force; and the Director of the Office of Management and Budget. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
The program office has 14 key performance parameters and system attributes that are critical to the KC-46 aircraft’s military capability and 7 technical performance capabilities that track progress to meeting contract specifications. Table 3 provides a description of each key performance parameter and system attribute. Table 4 provides the status of each technical performance capability.
In addition to the contact named above, Cheryl Andrew, Assistant Director; Kurt Gurka; Stephanie Gustafson; Katheryn Hubbell; Nate Vaught; and Robin Wilson made key contributions to this report.
KC-46 Tanker Aircraft: Challenging Testing and Delivery Schedules Lie Ahead. GAO-16-346. Washington, D.C.: April 8, 2016.
KC-46 Tanker Aircraft: Key Aerial Refueling Capabilities Should Be Demonstrated Prior to the Production Decision. GAO-15-308. Washington, D.C.: April 9, 2015.
KC-46 Tanker Aircraft: Program Generally on Track, but Upcoming Schedule Remains Challenging. GAO-14-190. Washington, D.C.: April 10, 2014.
KC-46 Tanker Aircraft: Program Generally Stable but Improvements in Managing Schedule Are Needed. GAO-13-258. Washington, D.C.: February 27, 2013.
KC-46 Tanker Aircraft: Acquisition Plans Have Good Features but Contain Schedule Risk. GAO-12-366. Washington, D.C.: March 26, 2012.
The KC-46 tanker modernization program, valued at about $44 billion, is among the Air Force's highest acquisition priorities. Aerial refueling—the transfer of fuel from airborne tankers to combat and airlift forces—is critical to the U.S. military's ability to effectively operate globally. The Air Force initiated the KC-46 program to replace about a third of its aging KC-135 aerial refueling fleet. Boeing was awarded a fixed price incentive contract to develop the first four aircraft, which are being used for testing. Among other things, Boeing is contractually required to deliver a total of 18 aircraft and 9 wing air refueling pod sets by August 2017. This is defined as required assets available. The program plans to eventually field 179 aircraft in total. The National Defense Authorization Act for Fiscal Year 2012 included a provision for GAO to review the KC-46 program annually through 2017. This is GAO's sixth report on this issue. It addresses (1) progress made in 2016 toward achieving cost, performance, and schedule goals and (2) development risk remaining. GAO analyzed key cost, schedule, development, test, and manufacturing documents and discussed results with officials from the KC-46 program office, other defense offices, the Federal Aviation Administration, and Boeing. The KC-46 tanker modernization program is meeting cost and performance targets, but has experienced some recent schedule delays. Costs: As shown in the table below, the program's total acquisition cost estimate has decreased about $7.3 billion, or 14 percent, since the initial estimate. This is primarily because there have been no requirements changes and there have been fewer engineering changes than expected. (then-year dollars in millions) Performance: The program office estimates that the KC-46 will achieve its key and technical performance capabilities, such as completing a mission 92 percent of the time. As noted below, though, much testing remains. 
Schedule: The program fixed design problems and was approved for low-rate initial production in August 2016, a year late. Boeing (the prime contractor) will not meet the original required assets available delivery schedule due to ongoing Federal Aviation Administration certifications of the aircraft, including the wing air refueling pods, and flight test delays. As shown, the remaining schedule was modified to allow Boeing to deliver the first 18 aircraft and pods separately by October 2018, 14 months later than first planned. GAO's analysis shows there is risk to the current delivery schedule due to potential delays in Federal Aviation Administration certifications and key test events. Boeing must also complete over 1,700 test points on average for each month from February to September 2017, a level that is more than double what it completed in the last 11 months. Program officials agree that there is risk to Boeing's test completion rate until it obtains Federal Aviation Administration approval for the design of all parts, including the pods, but test mitigation strategies are underway. GAO is not making recommendations.
Biological threats that could result in catastrophic consequences exist in many forms and arise from multiple sources. For example, several known biological agents could be made into aerosolized weapons and intentionally released in a transportation hub or other populated urban setting, introduced into the agricultural infrastructure and food supply, or used to contaminate the water supply. Concerned with the threat of bioterrorism, in 2004, the White House released HSPD-10, which outlines four pillars of the biodefense enterprise and discusses various federal efforts and responsibilities that help to support it. The biodefense enterprise is the whole combination of systems at every level of government and the private sector that can contribute to protecting the nation and its citizens from potentially catastrophic effects of a biological event. It is composed of a complex collection of federal, state, local, tribal, territorial, and private resources, programs, and initiatives, designed for different purposes and dedicated to mitigating various risks, both natural and intentional. The four pillars of biodefense outlined in HSPD-10 are (1) threat awareness, (2) prevention and protection, (3) surveillance and detection, and (4) response and recovery. Protecting humans, animals, plants, air, soil, water, and critical infrastructure from potentially catastrophic effects of intentional or natural biological events entails numerous activities carried out within and between multiple federal agencies and their nonfederal partners. Figure 1 shows the four pillars of biodefense, examples of some federal efforts that can support them, and federal agencies responsible for those efforts. The BioWatch program falls under the surveillance and detection pillar. It is an example of an environmental monitoring activity. DHS, in cooperation with other federal agencies, created the BioWatch program in 2003. 
The goal of BioWatch is to provide early warning, detection, or recognition of a biological attack. When DHS was established in 2002, a perceived urgency to deploy useful—even if imperfect—technologies in the face of potentially catastrophic consequences catalyzed the rapid deployment of many technologies, including the earlier generations of BioWatch collectors. In the initial deployment of BioWatch collectors—known as Generation 1—DHS deployed detectors to 20 major metropolitan areas, known as BioWatch jurisdictions, to monitor primarily outdoor spaces. DHS completed this initial deployment quickly—within 80 days of the President’s announcement of the BioWatch program during his 2003 State of the Union Address. To accomplish this quick deployment, DHS adapted an existing technology that was already used to accomplish other air monitoring missions. In 2005, DHS expanded BioWatch to an additional 10 jurisdictions, for a total of 30. This expanded deployment—referred to as Generation 2 (Gen-2)—also included the addition of indoor monitoring capabilities in three high-threat jurisdictions and provided additional capacity for events of national significance, such as major sporting events and political conventions. Currently, the BioWatch program collaborates with 30 BioWatch jurisdictions throughout the nation to operate approximately 600 Gen-2 collectors. These collectors rely on a vacuum-based collection system that draws air samples through a filter. These filters must be manually collected and transported to state and local public health laboratories for analysis using a process called Polymerase Chain Reaction (PCR). During this process, the sample is evaluated for the presence of genetic material from five different biological agents. If genetic material is detected, a BioWatch Actionable Result (BAR) is declared. Using this manual process, the determination of a BAR can occur from 12 to 36 hours after an agent is initially captured by the air filter. 
This 36-hour timeline consists of up to 24 hours for air sampling, up to 4 hours for sample recovery, and up to 8 hours for laboratory testing. Each BioWatch jurisdiction has either a BioWatch Advisory Committee or equivalent decision making group in place, composed of public health officials, first responders, and other relevant stakeholders. The BioWatch Advisory Committee is responsible for the day-to-day BioWatch operations, including routine filter collection and laboratory analysis of filter samples. In the event of a BAR, the BioWatch Advisory Committee is also responsible for determining whether that BAR poses a public health risk and deciding how to respond. The declaration of a BAR does not necessarily signal that a biological attack has occurred, as the Gen-2 detection process is highly sensitive and can detect minute amounts of pathogens that naturally occur in the environment. For example, at least two of the agents the program monitors occur naturally and have been detected in numerous areas of the United States. Since 2003, more than 100 BARs have been declared according to BioWatch program officials, but none were determined to be a potential risk to public health. Figure 2 shows the process that local BioWatch jurisdictions are to follow when deciding how to respond to a BAR. To reduce the time required to detect biological pathogens, DHS has been pursuing an autonomous detection capability for the BioWatch program. Envisioned as a laboratory-in-a-box, the autonomous detection system that DHS seeks would automatically collect air samples, produce and read PCR results every 4 to 6 hours, and communicate the results to public health officials without manual intervention. By automating the analysis, DHS anticipates that detection time could be reduced to 6 hours or less, making the technology more appropriate for monitoring indoor high-throughput facilities such as transportation nodes. 
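The 36-hour worst case is simply the sum of the phase durations described above; a minimal sketch (using only the upper-bound figures stated in this report, not measured performance) makes the comparison with the Gen-3 design goal explicit:

```python
# Illustrative comparison of BioWatch detection timelines. The phase
# durations are the upper bounds reported for the manual Gen-2 process;
# the 6-hour figure is the stated goal for autonomous (Gen-3) detection.
GEN2_PHASES_HOURS = {
    "air sampling": 24,       # filter collects air for up to 24 hours
    "sample recovery": 4,     # manual filter pickup and transport to a lab
    "laboratory testing": 8,  # PCR analysis at a public health laboratory
}

GEN3_GOAL_HOURS = 6  # autonomous target: produce and read PCR results in-box

gen2_worst_case = sum(GEN2_PHASES_HOURS.values())
print(f"Gen-2 worst case:    {gen2_worst_case} hours")   # 36 hours
print(f"Gen-3 goal:          {GEN3_GOAL_HOURS} hours")
print(f"Potential reduction: {gen2_worst_case - GEN3_GOAL_HOURS} hours")
```

Because an agent may be captured at any point during the 24-hour sampling window, actual detection times under the manual process range from 12 to the 36-hour worst case computed here.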
DHS also anticipates that operational costs will be reduced through the elimination of the daily manual collection and laboratory analysis process. Developing autonomous detection has proved challenging, according to BioWatch program officials, in part because some of the technology required was novel, but also because even the existing technologies—for example, the air collection system and the apparatus that reads the PCR results—had not been combined for this specific application in an operational environment before. As shown in figure 3, DHS began to develop autonomous detection technology in 2003. Initially, development of technologies to support autonomous detection was led by DHS's Science and Technology Directorate (S&T), which partnered with industry. Since fiscal year 2007, DHS's Office of Health Affairs (OHA) has been responsible for overseeing the acquisition of this technology, referred to as Generation 3 (Gen-3). In its 2011 report, the National Academies reported that the proposed enhancements to the BioWatch system will be possible only if significant scientific and technical hurdles are overcome. Similarly, as recently as March 2012, DHS's Assistant Secretary for Health Affairs testified that the Gen-3 technology has been challenging to develop. The overall policy and structure for acquisition management outlined in DHS's Acquisition Management Directive (AMD) 102-01 includes the department's Acquisition Life-cycle Framework—a template for planning and executing acquisitions. According to the directive, DHS adopted the Acquisition Life-cycle Framework to ensure consistent and efficient acquisition management, support, review, and approval throughout the department. As we have previously reported, without the development, review, and approval of key acquisition documents, agencies are at risk of having poorly defined requirements that can negatively affect program performance and contribute to increased costs. 
As shown in figure 4, DHS's Acquisition Life-cycle Framework includes four acquisition phases through which DHS determines whether it is sensible to proceed with a proposed acquisition: (1) identify a capability need; (2) analyze and select the optimal solution to meet that need; (3) obtain the solution; and (4) produce, deploy, and support the solution. During the first three phases, the DHS component pursuing the acquisition is required to produce key documents in order to justify, plan, and execute the acquisition. These phases each culminate in an Acquisition Decision Event (ADE), where the Acquisition Review Board—a cross-component board of senior DHS officials—determines whether a proposed acquisition has met the requirements of the relevant Acquisition Life-cycle Framework phase and is able to proceed. The Acquisition Review Board is chaired by the Acquisition Decision Authority—the official responsible for ensuring compliance with AMD 102-01. For the Gen-3 acquisition, DHS's Deputy Secretary serves as the Acquisition Decision Authority. DHS held an Acquisition Review Board related to ADE-2B on August 16, 2012, during which the BioWatch program was seeking approval to initiate the next phase of the acquisition. The Acquisition Decision Authority did not make a final ADE-2B decision, but did authorize the program to issue a solicitation for performance testing under the next testing phase. The Acquisition Decision Authority also required that the program office return to the Acquisition Review Board for approval prior to issuing a performance testing contract—which would allow the program to acquire a small number of test units. Furthermore, before undertaking the remaining steps in the acquisition, the program office must return to the Acquisition Review Board for ADE-2B with updated information, including an Analysis of Alternatives and Concept of Operations. 
DHS has not specified a time frame for completing these actions, but according to DHS officials, completing the Analysis of Alternatives may take up to 1 year. DHS approved the Gen-3 acquisition in October 2009, but it did not fully engage in the early phases of its acquisition framework to ensure that the acquisition was grounded in a justified mission need and that it pursued an optimal solution—for example, DHS did not fully develop a Mission Needs Statement or an Analysis of Alternatives with a cost-benefit analysis, as called for in its Acquisition Life-cycle Framework. DHS skipped Phase 1 of the Acquisition Life-cycle Framework for the Gen-3 acquisition. Specifically, it did not hold ADE-1 and prepared a Mission Needs Statement later to justify a predetermined solution. According to DHS's Acquisition Life-cycle Framework, in Phase 1, the program office is to develop a Mission Needs Statement to make a case to decision makers that the acquisition represents a justified need that warrants the allocation of limited resources. At the end of Phase 1, the Acquisition Decision Authority is to review the Mission Needs Statement and other information during ADE-1 and decide whether the need is of sufficiently high priority to continue with the acquisition. However, according to BioWatch program officials, the Gen-3 acquisition began at ADE-2A, which is intended to be the decision gate at the end of Phase 2 of the Acquisition Life-cycle Framework. The Mission Needs Statement was finalized on October 6, 2009, just weeks before ADE-2A. As shown in figure 5, DHS began to pursue a specific autonomous detection solution well before completing a Mission Needs Statement. 
Specifically, DHS’s Integrated Planning Guidance (IPG) for fiscal year 2010-2014, which was finalized in March 2008, included very specific goals for the next generation of BioWatch—to deploy in all major cities an autonomous BioWatch detection device reducing the operating cost per site by more than 50 percent and warning time to less than 6 hours. The purpose of DHS’s IPG is to communicate the Secretary’s policy and planning goals to component-level decision makers to inform their programming, budgeting, and execution activities. As such, this specific set of goals for BioWatch Gen-3 demonstrates that DHS leadership had established a course for the acquisition by March 2008, in advance of any efforts to define the mission need through the Mission Needs Statement process, which was finalized more than a year and a half later. BioWatch program officials said they were directed by DHS to prepare the Mission Needs Statement—along with other required documentation for the ADE-1 and ADE-2A decision gates—on an accelerated time frame of about 6 weeks to prepare for the ADE-2A decision, efforts that they said would typically require at least 8 months. According to these officials, they were aware that the Mission Needs Statement prepared for ADE-2A did not reflect a systematic effort to justify a capability need. Although such an effort would provide a platform to help make trade-off decisions in terms of costs, risks, and benefits throughout the remainder of the acquisition process, officials said the time they were given would not have allowed for such an effort. They said that the department directed them to proceed because there was already departmental consensus around the solution. Moreover, in its fiscal year 2009 budget request, submitted in February 2008, DHS requested funding to procure BioWatch automated detection sensors and initiate deployment activities of the automated sensor system. 
These funds—requested more than 18 months prior to the acquisition's formal approval at ADE-2A—were intended to fund operational testing activities for Gen-3 BioWatch prototypes as well as the procurement of 150 automated detection sensors that DHS sought to deploy as an interim solution until the full Gen-3 acquisition could be completed. A prototype version of this interim solution was first fielded in 2007, shortly after DHS's OHA assumed responsibility for the program. Limited documentation is available to reflect the decision making process that occurred before the October 2009 Mission Needs Statement was finalized, including decisions related to the very specific IPG goals, the pursuit of funding for Gen-3, and the deployment of an interim solution before undertaking an effort to establish a justified mission need. We interviewed multiple officials in various DHS offices who had knowledge of Gen-3 in this early decision making period and the process that DHS used to justify the need to acquire Gen-3. However, none of these officials could describe what processes, if any, the department followed to determine that Gen-3 was a justified need. On the other hand, these officials all described a climate, in the wake of the September 11, 2001, terrorist attacks and the subsequent Amerithrax attacks, in which the highest levels of the administration expressed interest in quickly deploying the early generation BioWatch detectors and subsequently improving the functionality of these detectors—as quickly as possible—to allow for faster detection and an indoor capability. On the basis of this interest, officials from the multiple DHS offices said it was their understanding that the administration and departmental leadership had already determined that the existing BioWatch technology would need to be expanded and entirely replaced with an autonomous solution well before the acquisition was approved at ADE-2A. 
DHS guidance states that the Mission Needs Statement should consider the IPG, but it also directs the program to focus on the capability need without prescribing a specific technical solution. The Mission Needs Statement is designed to serve as the foundational document upon which subsequent Acquisition Life-cycle Framework efforts are based. As such, a Mission Needs Statement that focuses on the capability need can help articulate and build consensus around the goals and objectives for a program in a way that provides a touchstone throughout the rest of the acquisition processes as the program endeavors to identify optimal solutions and contends with technology, budget, schedule, and risk realities. The Gen-3 Mission Needs Statement prepared for ADE-2A, in response to the very specific solution set prescribed by DHS leadership in the IPG, asserted a specific technological solution—total replacement of the existing vacuum-based, manual technology with autonomous detectors—as the only viable solution. Because the Mission Needs Statement was completed after DHS had prescribed specific goals for Gen-3 in the IPG and requested funding to field an interim solution, it appears to be justification for a predetermined solution, rather than the deliberate and systematic consideration of capability needs that would serve as the foundation for the remaining acquisition processes. As such, its utility as a foundation for subsequent acquisition efforts—for example, identifying an optimal solution and balancing mission requirements with budget, schedule, and risk considerations—was limited. DHS did not use the processes in Phase 2 of the Acquisition Life-cycle Framework to systematically identify the optimal solution based on cost-benefit and risk information. We have long advised that DHS make risk-informed investments of its limited resources. 
For example, in February 2005, we reported that because the nation cannot afford to protect everything against all threats, choices must be made about protection priorities given the risk and how to best allocate available resources. More recently, we reported in September 2011 that because DHS does not have unlimited resources and cannot protect the nation from every conceivable threat, it must make risk-informed decisions regarding its homeland security approaches and strategies. Phase 2 of the DHS Acquisition Life-cycle Framework is intended to support these kinds of trade-off decisions by requiring DHS components to complete an Analysis of Alternatives that systematically identifies possible alternative solutions that could satisfy the identified need, considers cost-benefit and risk information for each alternative, and finally selects the best option from among the alternatives. The Analysis of Alternatives is intended to provide assurance to DHS at the ADE-2A decision gate that the component has chosen the most cost-effective solution to mitigate the capability gap identified in the Mission Needs Statement. To provide this assurance and allow DHS to make trade-off decisions, the guidance states that developing the Analysis of Alternatives should be a systematic analytic and decision making process to identify and document an optimal solution that includes an understanding of the costs and benefits of at least three viable alternatives. The guidance directs the program to compare the alternatives based on cost, risk, and ability to respond to identified capability gaps. Finally, the guidance calls for an independent entity to complete this analysis to ensure that it is done objectively and without bias or vested interest in the study's outcome. The Gen-3 Analysis of Alternatives, completed in conjunction with the Mission Needs Statement by the BioWatch Program Office, does not reflect a systematic analytic and decision making process. 
Instead, the Analysis of Alternatives, like the Mission Needs Statement, was designed to support the decision the department had already made to pursue autonomous detection. The Analysis of Alternatives maintained that no modifications to the existing system would satisfy the goals in the IPG, and as such, it concluded that replacing the deployed technology entirely with autonomous detectors was the only viable solution. Along these lines, the Analysis of Alternatives included two alternatives: (1) expanding program coverage within current BioWatch cities and additional cities by replacing all currently deployed detectors with autonomous detection technology, and (2) undertaking the same expansion using the currently deployed detectors but modifying the procedures to allow for filter collections every 8 hours instead of every 24 hours (a time frame that by definition would not meet the specific goals of the IPG). The analysis did provide some cost information for each alternative, but it did not fully explore costs or consider benefit and risk information. As with the Mission Needs Statement, program officials told us that they were advised that a comprehensive Analysis of Alternatives would not be necessary because departmental consensus already existed that autonomous detection was the optimal solution. In discussing the cost trade-offs between the autonomous detection solution and more frequent filter collection using the currently deployed technology in the Analysis of Alternatives, DHS focused on cost per detection cycle—that is, the cost each time an autonomous detector tests the air for pathogens or the cost each time a Gen-2 filter is manually collected and tested in a laboratory. According to our analysis of the June 2011 Life-Cycle Cost Estimate, cost per detection cycle is estimated to be seven times lower with the Gen-3 technology than the cost per detection cycle based on historical data for Gen-2 detectors. 
However, by only considering the cost per detection cycle of the two alternatives, the analysis does not help ensure the pursuit of an optimal solution based on cost, risk, and capability—as called for in the guidance. To help ensure the optimal solution, DHS could benefit from a more complex and nuanced cost analysis that considers a number of factors in addition to the cost per detection cycle. For example, although not a cost-effectiveness analysis, table 1 shows that deploying and operating Gen-3 detectors is not necessarily more affordable than the existing Gen-2 deployment. According to our analysis, the total annual cost to operate Gen-3 is estimated to be about four times more than the cost of the existing Gen-2 deployment. The higher cost reflects both higher annual operating costs per detector and an increase in the number of detectors and jurisdictions covered. In addition to a limited cost analysis, the Gen-3 Analysis of Alternatives contained no analysis of benefits. In fact, it did not identify any benefits of investment beyond the assumption—inherent in its focus on increasing the number of detection cycles per day—that earlier detection has the potential to save lives and limit economic loss, a basic and accepted principle for all enhanced surveillance efforts. In selecting the optimal solution, other costs and benefit factors like the examples shown in table 2 could have been helpful. Identifying benefits and conducting a more complete analysis of cost and benefits would help DHS develop the kind of information that would inform tradeoff decisions and the selection of an optimal solution. For example, DHS plans to deploy about four times as many Gen-3 detectors as Gen-2 detectors, and each Gen-3 detector will test the air more frequently, so the Gen-3 deployment plan will increase the depth and range of the coverage to be provided. 
Specifically, Gen-3 is expected to cover 90 percent of the population in the jurisdictions where it is deployed, as opposed to 65 percent with the currently deployed technology. On the basis of the estimated annual program cost and the percentage of U.S. population covered, we calculated that the estimated Gen-3 deployment will cost about $4 million annually, on average, for each 1 percent of population covered within BioWatch jurisdictions (about $2.7 million more for each 1 percent covered than Gen-2). Moreover, according to BioWatch program officials, this kind of calculation may actually underestimate the cost of increasing population coverage by deploying more detectors, because the relationship between number of detectors and population covered is not linear. However, because the Gen-3 Analysis of Alternatives does not include a discussion of benefits, or a cost-benefit analysis, it does not consider the extent to which expanding the population covered under the proposed Gen-3 deployment would contribute to a reduction of risk and at what cost. In its evaluation of BioWatch and public health surveillance, the National Academies stated that the BioWatch program should not expand its coverage of biological agents or jurisdictions without a clear understanding of the change's contribution to reducing mortality or morbidity in conjunction with clinical case finding and public health. However, without a discussion of benefits, the Analysis of Alternatives could not help DHS develop this understanding before approving the acquisition at ADE-2A. In June 2011, the program office commissioned a study from Sandia National Laboratories that began to develop a basis for this kind of understanding of benefits related to public health outcomes. However, critical information related to both the costs and the benefits of the planned Gen-3 approach remains to be explored. 
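The cost-per-coverage comparison above can be sketched in a few lines. The annual operating costs below are illustrative assumptions chosen to be consistent with the ratios stated in this report ("about four times" the Gen-2 annual cost; roughly $4 million versus $1.3 million per 1 percent of population covered); they are not the official Life-Cycle Cost Estimate figures.

```python
# Rough reconstruction of the cost-per-coverage metric discussed in the text.
# Annual costs ($ millions) are assumed for illustration only.

def cost_per_percent(annual_cost_millions: float, pct_population_covered: float) -> float:
    """Average annual cost per 1 percent of population covered."""
    return annual_cost_millions / pct_population_covered

# Assumed annual operating costs, consistent with the report's ~4x ratio.
gen2 = cost_per_percent(annual_cost_millions=85, pct_population_covered=65)
gen3 = cost_per_percent(annual_cost_millions=360, pct_population_covered=90)

print(f"Gen-2: ~${gen2:.1f}M per 1% of population covered")   # ~$1.3M
print(f"Gen-3: ~${gen3:.1f}M per 1% of population covered")   # ~$4.0M
print(f"Difference: ~${gen3 - gen2:.1f}M per 1% covered")     # ~$2.7M
```

As the report notes, this simple per-percent metric likely understates the marginal cost of added coverage, because the relationship between the number of detectors and the population covered is not linear.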
DHS has commissioned the Homeland Security Institute (HSI) to conduct an independent study that has as its overarching objective the characterization of the state of Gen-3 technology—that is, whether it is mature enough to continue as an acquisition or whether it needs additional development work. As part of that study, DHS has asked HSI to consider, among other things, (1) whether the threat is adequately described, (2) whether it is possible to determine costs and benefits, and (3) to what extent prior studies like the Sandia study have been validated and used to inform plans for Gen-3 deployment. DHS told us that the study would be completed by September 1, 2012. As of early September 2012, DHS has not provided us with a copy of the study or responded to requests to provide an updated timeline for the study. Beyond the uncertainty related to the costs and benefits of the planned Gen-3 approach, there is additional uncertainty about the benefit of this kind of environmental monitoring because as a risk mitigation activity, it has a relatively limited scope. As the study committee for the National Academies evaluation of BioWatch noted, there is considerable uncertainty about the likelihood and magnitude of a biological attack, and how the risk of a release of an aerosolized pathogen compares with risks from other potential forms of terrorism or from natural diseases. The report also notes that while the BioWatch program is designed to detect certain biological agents (currently five agents) that could be intentionally released in aerosolized form, detecting a bioterrorism event involving other pathogens or routes of exposure requires other approaches. Given the higher total estimated operating cost for the Gen-3 program, it is important, especially in an increasingly resource-constrained environment, to also consider the benefit—in terms of its ability to mitigate the consequences of a potentially catastrophic biological attack—that the extra investment provides. 
These scope limitations provide context in both the consideration of mission need and in analyzing cost effectiveness. Because the Gen-3 Analysis of Alternatives focuses on justifying total replacement of Gen-2 technology with an autonomous detection technology, it did not explore whether another solution might be more effective. For example, according to BioWatch program officials, it is possible that other options—including but not limited to deployment of some combination of both technologies, based on risk and logistical considerations—may be more cost-effective. Along these lines, program officials told us that in 2011, to help them manage various budget contingencies, they prepared a summary of available deployment options for Gen-3 that includes a mixed deployment of Gen-3 and Gen-2 units. However, a more comprehensive solution set was not available to be considered at ADE-2A; nor has such an effort since been undertaken to inform investment and trade-off decisions at the departmental level. Given the uncertainty related to the costs, benefits, and risk mitigation potential of Gen-3, DHS does not have reasonable assurance that the strategy of expanding and completely replacing the existing Gen-2 technology with an autonomous detection technology provides the most cost-effective solution. In October 2009, DHS approved the Gen-3 acquisition at ADE-2A— based on the information contained in acquisition documents provided by the BioWatch program—authorizing the BioWatch program to proceed with characterization testing of Gen-3 candidate technologies. One critical purpose of the ADE-2A documentation set required by DHS’s acquisition guidance is to describe the expected performance, cost, and schedule parameters for an acquisition. We reported in June 2010 that stable parameters for performance, cost, and schedule are among the factors that are important for successfully delivering capabilities within cost and schedule expectations. 
We also reported in May 2012 that without the development, review, and approval of key acquisition documents, agencies are at risk of having poorly defined requirements that can negatively affect program performance and contribute to increased costs. However, the ADE-2A Acquisition Decision Memorandum stated that significant data necessary for the proper adjudication of an ADE-2A decision were missing. Specifically, it noted that the ADE-2A documentation set did not contain three required documents, including: (1) a Concept of Operations—intended to provide critical information on how an acquisition will function in the operational environment, (2) an Integrated Logistics Support Plan—intended to document how an acquisition will be supported and sustained through its life-cycle, and (3) a Life-Cycle Cost Estimate—intended to provide a credible estimate of the life-cycle cost of the acquisition. Additionally, we found that certain information contained in the ADE-2A documentation set on operational requirements, schedule projections, and cost were not developed using reliable methods as discussed later in this report. For more information on the limitations of Gen-3 acquisition documents and processes at ADE-2A, see appendix I. As was the case for the Mission Needs Statement and the Analysis of Alternatives, BioWatch program officials stated that they had to prepare ADE-2A documentation quickly because ADE-2A had been accelerated by 14 months. However, in the absence of complete and reliable information, DHS had limited assurance that the acquisition would successfully deliver the intended capability within cost and on schedule. Nevertheless, the Deputy Secretary approved the acquisition, but she required the program office to provide quarterly progress updates. On the basis of the Gen-3 documentation submitted at ADE-2A, DHS expected to acquire a system that would cost $2.1 billion, be fully deployed by fiscal year 2016, and meet certain performance requirements. 
As shown in table 3, as of July 2012, the performance, schedule, and cost parameters for the Gen-3 acquisition are significantly different from the parameters DHS expected when it approved the acquisition at ADE-2A. Regarding performance expectations, the BioWatch program submitted a revised Operational Requirements Document to DHS for approval that includes a proposed revision to the key performance parameter for system sensitivity— the amount of a pathogen that would have to be present in the air for the system to detect its presence. DHS acquisitions guidance requires components to develop key performance requirements that an acquisition must meet in order to fulfill the program’s fundamental purpose and close the capability gap(s) identified in the Mission Needs Statement, and to document these requirements in an Operational Requirements Document. However, BioWatch program officials told us that the original sensitivity requirement was based on what DHS thought the technology could theoretically achieve, and was not informed by a scientific and risk-informed assessment of what level of sensitivity would be needed—from an operational perspective—to fulfill the Gen-3 purpose of mitigating consequences in the event of a biological attack. Additionally, the process used to set the sensitivity requirement did not reflect stakeholder consensus about how to balance mission needs with technological capabilities. Specifically, the BioWatch program did not prepare a Concept of Operations before ADE-2A. According to DHS acquisitions guidance, in developing a Concept of Operations, stakeholders engage in a consensus-building process regarding how to balance technological capabilities with mission needs in order to gain consensus on the use, capabilities, and benefits of a system. 
Because DHS did not prepare a Concept of Operations before establishing operational requirements, the sensitivity requirement did not reflect broad stakeholder engagement in balancing schedule, cost, and risk realities with achieving a specified mission outcome—for example, a specific level of population protection. During characterization testing, the candidate technology tested was unable to meet the original sensitivity requirement. According to the September 2011 Operational Assessment, the system sensitivity demonstrated during characterization testing was orders of magnitude lower than the original requirement, meaning that a significantly greater concentration of a pathogen than specified in the requirement would have to be present in the air to trigger detection. According to BioWatch program officials, the original sensitivity requirement was set based on interest in pushing the limits of potential technological achievement rather than in response to a desired public health protection outcome. They said that this led to a requirement that may have been too stringent, resulting in higher costs and schedule delays without demonstrated mission imperative. Because DHS did not ground the sensitivity requirement in Gen-3 program goals, when the candidate technologies were unable to meet the requirement, DHS encountered delays and uncertainty about how to move forward. In response to these concerns, the BioWatch program directed Sandia National Laboratories to evaluate the level of system sensitivity that would be necessary for the Gen-3 program to fulfill its fundamental purpose. The study, which was completed in January 2012, contained findings that, according to BioWatch Program officials, confirm that the sensitivity requirement could be relaxed without significantly affecting the program’s public health mission. 
In response to this study, the BioWatch program submitted an updated Operational Requirements Document with a revised sensitivity requirement to DHS in March 2012 for approval in preparation for ADE-2B, as shown in figure 6. The need to reevaluate the sensitivity requirement for the Gen-3 acquisition has contributed to delays in the acquisition schedule. For example, in August 2011, the BioWatch program requested to postpone the ADE-2B, scheduled for September 2011, until December 2011 to give the program time to address the testing issues associated with the sensitivity requirement. Given that the Sandia study was not available until January 2012, the program office again requested that ADE-2B be delayed until March or April 2012. As of September 2012, DHS has not approved the revised sensitivity requirement and plans to revisit that decision at the next Acquisition Review Board for ADE-2B. DHS acquisition guidance states that the accurate definition of requirements is imperative if an acquisition is to be completed within schedule constraints and still meet the component and department’s mission performance needs. It follows that these schedule delays could have been mitigated if the original sensitivity requirement had been more realistically set using scientific and risk information to ensure that it aligned with the mission need of the program and balanced mission goals with technological feasibility. In addition to the impact that changing the sensitivity requirement had on the acquisition schedule, the change in schedule expectations since October 2009 can also be explained by DHS not employing reliable schedule estimation methods to produce the schedule estimate that was submitted with the Acquisition Program Baseline in the ADE-2A documentation set. 
Our prior work has found that realistic acquisition program baselines with stable requirements for cost and schedule are among the factors that are important to acquisitions successfully delivering capabilities within cost and schedule constraints. However, BioWatch program officials told us that they set the ADE-2A schedule estimate aggressively because there was pressure to respond quickly to the call to deploy autonomous detection. Additionally, they reported that they did not account for risk in the schedule estimates that were included in the Acquisition Program Baseline for ADE-2A. The BioWatch program office has revised the acquisition schedule since ADE-2A was held in 2009. The most recent update—completed in January 2012—estimated full deployment of the Gen-3 system in fiscal year 2022, 6 years later than anticipated. While the acquisition is currently on track with the January 2012 schedule, the schedule remains subject to uncertainty, in part because of a pending decision about the acquisition strategy. According to BioWatch program officials, they had intended to engage in a full effort to develop a Life-Cycle Cost Estimate in accordance with the GAO Cost Estimating Guide ahead of ADE-2A, but were directed by the department to proceed with the best point estimate they could derive. Additionally, both BioWatch program and PARM officials described a climate before ADE-2A in which the department’s business processes—including acquisition practices—were maturing and thus were less rigorous in their adherence to best practices for cost and schedule estimating. The BioWatch program has revised the cost estimate using more reliable methods since the ADE-2A estimate was prepared in 2009. The most recent update—completed in June 2011—shows the estimated life-cycle cost for the Gen-3 acquisition to be $5.8 billion (80 percent confidence), much higher than the $2.1 billion point estimate presented at ADE-2A. 
The 2011 Life-Cycle Cost Estimate was aligned with GAO’s Cost Estimating Guide, which recommends that agencies calculate a range of possible cost estimates based on different risk levels in order to account for uncertainty. According to the guide, experts agree that program cost estimates should be budgeted to at least the 50 percent confidence level, but budgeting to a higher level (for example, 70 percent to 80 percent) is now a common practice. Moreover, a higher confidence level in cost estimating may be more prudent, as experts stress that contingency reserves are necessary to cover increased costs resulting from unexpected design complexity, incomplete requirements, technology uncertainty, and other uncertainties that can affect programs, according to the GAO Cost Estimating Guide. Acknowledging the benefit of a higher confidence level for cost estimates, the BioWatch program recommended that the 80 percent confidence level estimate be used for planning purposes. As such, the $5.8 billion figure presented in the 2011 cost estimate was calculated at the 80 percent confidence level—meaning that there is an 80 percent chance that the actual life-cycle cost will be this amount or less, according to BioWatch officials. BioWatch program officials told us that the large difference between the ADE-2A cost estimate and the June 2011 cost estimate is primarily driven by the inclusion of risk in the June 2011 estimate, rather than by changes to the program. However, these officials also noted other factors that contributed to the difference. For example, the 2009 estimate was not as robust as the 2011 estimate because it was not based on the work breakdown structure for the program. Additionally, because of changes in the schedule estimates, the June 2011 estimate considers costs through fiscal year 2028, whereas the 2009 estimate considered costs through fiscal year 2020. 
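The relationship between a risk-adjusted estimate and a confidence level can be illustrated with a small Monte Carlo sketch, the standard technique behind cost-risk analysis of this kind. All cost elements, dollar figures, and uncertainty spreads below are hypothetical, not BioWatch program inputs:

```python
import random

random.seed(42)

# Illustrative cost elements in $ billions with (low, most likely, high)
# spreads. These figures are hypothetical, not BioWatch program inputs.
elements = {
    "development": (0.3, 0.5, 1.2),
    "production":  (0.4, 0.7, 1.8),
    "deployment":  (0.2, 0.4, 1.0),
    "operations":  (1.5, 2.5, 5.0),
}

def simulate_total():
    """Draw one possible life-cycle cost by sampling every element once."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in elements.values())

trials = sorted(simulate_total() for _ in range(100_000))

def confidence_level(sorted_costs, p):
    """Cost at the p-percent confidence level: p percent of simulated
    outcomes fall at or below this value."""
    idx = min(int(p / 100 * len(sorted_costs)), len(sorted_costs) - 1)
    return sorted_costs[idx]

# A risk-unadjusted point estimate simply sums the most likely values.
point_estimate = sum(mode for _, mode, _ in elements.values())

print(f"point estimate: ${point_estimate:.1f}B")
print(f"50% confidence: ${confidence_level(trials, 50):.1f}B")
print(f"80% confidence: ${confidence_level(trials, 80):.1f}B")
```

Because the uncertainty on each element is skewed toward overruns, the 50 and 80 percent confidence values exceed the naive point estimate, showing in miniature why a risk-adjusted figure can substantially exceed an unadjusted point estimate.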
These changes in performance, schedule, and cost, along with maturation in the department’s acquisition management process, create an opportunity for DHS to reevaluate the mission need and alternatives in a more comprehensive and systematic fashion, and in accordance with DHS acquisitions guidance, to help ensure that it invests its limited resources in the most cost-effective solution possible. In addition, using comprehensive and systematically developed information, in conjunction with good practices for cost and schedule estimating like those described in the GAO Cost Estimating Guide, could help ensure that the department and policymakers have the most reliable performance, schedule, and cost information available for decision making. According to DHS officials, the remaining steps in the Gen-3 acquisition include performance testing, operational testing and evaluation, production, deployment, and sustainment. Figure 7 shows the timeline, based on the January 2012 Acquisition Program Baseline and discussions with BioWatch program officials, for the remaining steps to deploy and operate Gen-3. First, DHS plans to issue a solicitation for performance testing in the next testing phase, but the Acquisition Review Board must provide approval before the program awards a contract. In addition, final ADE-2B approval will be required for the remaining acquisition steps, including operational testing and evaluation. In preparation for ADE-2B, the BioWatch program has updated key acquisition documents—including the Life-Cycle Cost Estimate and Acquisition Program Baseline—as required by the Acquisition Decision Authority in a February 2012 memo. In order to inform the ADE-2B decision, these documents must accurately reflect changes to Gen-3 performance requirements and updated cost and schedule estimates for the acquisition and therefore may require further revisions. For more information on characterization test events and results, see appendix II. 
The system’s capabilities in an operational environment remain unverified, and failure to demonstrate this capability may seriously inhibit user confidence in the system. Results from operational test and evaluation will be used to inform ADE-3, which, if approved, would authorize full-rate production and deployment of Gen-3. DHS’s Acquisition Life-cycle Framework requires that the BioWatch program provide proof that the technology satisfies the operational requirements. To ensure that the full system satisfies the operational requirements, the BioWatch program intends to design a testing plan that demonstrates that the full system—including the information technology network when it is developed—can operate as intended, while complying with legal restrictions on testing for pathogens in BioWatch jurisdictions. DHS has not yet finalized a testing strategy, and the final test plan will depend on the candidate technologies chosen for testing following ADE-2B. Whatever the strategy, DHS officials from the BioWatch program and the Science and Technology Directorate office that oversees testing said that operational test and evaluation will include a number of subsystem and full system test events from which performance in an operational environment can be modeled and extrapolated. Table 4 provides examples of possible test events to demonstrate Gen-3 performance. Collectively, the BioWatch program estimates that this testing will take approximately 3 years and cost approximately $89 million. During operational testing and evaluation, the BioWatch program must prepare for and mitigate several limitations, including the following:

Inability to fully test Gen-3’s detection capability: BioWatch officials told us that legal restrictions on the aerosolized release of all five BioWatch agents in U.S. cities limit the BioWatch program’s ability to demonstrate full and subsystem performance in an operational environment. Without releasing the agents in BioWatch jurisdictions, the BioWatch program is unable to test the system’s ability to detect them in the operational environment. According to BioWatch program officials and DHS S&T officials who assist with test design, designing laboratory and field tests that can compensate for these limitations on pathogen use is a goal guiding the development of the testing plan for operational testing and evaluation.

Inconsistent performance in different operational environments: The candidate system tested during the characterization field test performed better at some sites than others. Specifically, detectors located on underground subway platforms had higher incidences of malfunction than detectors in other locations. These malfunctions may be associated with the presence of metallic brake dust; regardless of the cause, they demonstrate that different operational environments pose different challenges. The BioWatch program plans to conduct laboratory testing as well as modeling to further assess detector performance under different operational conditions.

Difficulty verifying the false positive rate: In order to build user confidence in the system, the BioWatch program has established a stringent threshold of 1 in 10 million for the false positive rate—that is, the rate at which the system is allowed to indicate that a pathogen is present when one is not. However, according to BioWatch documentation, 33.5 years of operational testing would be required to fully demonstrate that the system meets the established false positive rate. Therefore, the BioWatch program plans to use data from laboratory testing to model and extrapolate the probability of a false positive. According to program documentation, the amount of time planned for operational testing will be sufficient to reveal any issues with the false positive performance of the candidate technologies tested. 
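The scale of the verification problem behind the multi-decade estimate can be illustrated with simple arithmetic based on the statistical rule of three. The fleet size and testing cadence below are assumptions chosen purely for illustration; the report does not state the inputs behind the program's 33.5-year figure:

```python
# Required false positive rate: at most 1 in 10 million tests.
one_in = 10_000_000

# Rule of three: observing zero false positives in n trials bounds the true
# rate below roughly 3/n at 95 percent confidence, so directly demonstrating
# a 1-in-10-million rate takes on the order of 3 * 10,000,000 tests.
trials_needed = 3 * one_in

# Hypothetical fleet assumptions, purely for illustration.
detectors = 100
tests_per_detector_per_day = 24  # e.g., one automated assay cycle per hour

tests_per_year = detectors * tests_per_detector_per_day * 365
years_required = trials_needed / tests_per_year
print(f"tests needed: {trials_needed:,}")
print(f"years of fleet-wide operation: {years_required:.1f}")
```

Even under generous assumptions about fleet size and cadence, the required observation time runs to decades, which is why the program plans to extrapolate from laboratory data rather than verify the rate directly in the field.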
The goal of the next phase of testing is to demonstrate that Gen-3 candidate technologies can operate as intended in the operational environment. To achieve that goal, which is required for ADE-3, the BioWatch program must successfully mitigate these testing limitations. For example, to address the inconsistent performance in testing environments, the program must determine whether and how to adjust laboratory conditions to better reflect the operational environment by exposing the detectors to contaminants such as dust and pollen. To be ready to produce and deploy Gen-3, DHS must demonstrate technological readiness for the full system based on both individual component readiness and the maturity of the integration of those components. In August 2011, on the basis of results of characterization testing, the Institute for Defense Analyses conducted a Technology Readiness Assessment—a formal independent review that assesses the maturity of critical hardware and software technologies to be used in systems—for Gen-3. Using the Department of Defense’s (DOD) Technology Readiness Level (TRL) scale, which defines levels of technological maturity on a scale of 1 to 9, the assessment assigned TRL scores to the Gen-3 candidate technology’s individual critical technology elements, that is, those subsystems that are vital to the functioning of the system and are either new or novel applications or pose major technical risk. The assessment rated all but one of the critical technology elements as TRL 7—indicating a relatively high level of maturity for each element assessed. However, the assessment does not provide an overall TRL for the full Gen-3 system. It notes that doing so could obscure the strengths and weaknesses of individual system components, and says that DOD’s Technology Readiness Assessment Deskbook, which provides guidance for assigning TRLs, does not describe how to aggregate TRLs. 
However, other DOD guidance specific to chemical and biological defense says that a TRL evaluation is generally undertaken to establish a system’s level of maturity relative to a specific purpose, which suggests that the next phase of testing should result in a technology readiness assessment that provides an indication of how well these components perform together in order to meet the mission need of autonomous detection. Furthermore, we have previously reported that underestimating the complexity of systems integration can be a cause of significant cost and schedule growth. DHS also has not assessed the technology readiness of the data network, a major component of the Gen-3 system, or its integration into the system because it has not yet been developed. The data network and its integration will therefore require demonstration prior to production and deployment of Gen-3. If the BioWatch program can demonstrate that the candidate technology meets requirements and DHS approves the Gen-3 acquisition at ADE-3, the DHS June 2011 life-cycle cost estimate indicates that Gen-3 is expected to cost $5.8 billion (80 percent confidence) through June 2028. As shown in figure 8, approximately $5.7 billion of this total has not yet been spent and is expected to primarily fund operations once the system is deployed. To prepare for the deployment of Gen-3, the BioWatch program must work with Gen-3 jurisdictions to prepare sites for detector placement and to develop location-specific Concepts of Operations to provide key information and considerations—such as specifying roles and responsibilities and developing public information and risk communication messages—that are integral to response operations in the event that Gen-3 detects a pathogen. 
Like the Gen-2 system, the Gen-3 system is to be operated by BioWatch jurisdictions, and therefore the system’s usefulness in improving response time is expected to be determined, in part, by each jurisdiction’s willingness to respond to a positive test result, which, if incorrect, could have large monetary costs and public and political repercussions. According to BioWatch program officials, they want the jurisdictions to have enough confidence in the system that they are willing to take action based on positive results from a Gen-3 detector without confirmatory laboratory testing. Therefore, according to BioWatch program officials, they have taken steps to increase jurisdictions’ confidence in the Gen-3 system. For example, they provide guidance to jurisdictions and are in the process of developing a quality assurance process to track system performance. Furthermore, these officials anticipate running Gen-2 and Gen-3 concurrently for up to 6 months in BioWatch jurisdictions, and requiring all candidate technologies to archive positive samples so that the jurisdictions can run confirmatory laboratory analysis on the samples. Despite Gen-3’s potential to save lives under specific conditions, uncertainty remains about its general risk mitigation value. DHS established the strategy to quadruple the number of deployed detectors and replace all Gen-2 technology with an autonomous solution while expanding to 20 additional cities without engaging in a robust mission needs effort to serve as a foundation for subsequent acquisition efforts. As we have previously reported, because DHS does not have unlimited resources and cannot protect the nation from every conceivable threat, it must make risk-informed decisions regarding its homeland security approaches and strategies. 
In addition, we have previously reported that programs that conduct a limited assessment of alternatives before the start of system development tend to experience poorer outcomes than programs that conduct more robust analyses. Without a justified mission need to ground acquisition decision making or a systematic analysis of costs, benefits, and risk, DHS has pursued goals (such as the time threshold of 6 hours) and specific technological requirements (such as the sensitivity threshold) that may or may not support optimal solutions. Reevaluating the mission need and systematically analyzing alternatives based on cost-benefit and risk information could help DHS gain assurance that it is pursuing an optimal solution. Furthermore, difficulty attaining the original goals has contributed to challenges in meeting milestones and deadlines for deployment. In 2009, when the Acquisition Decision Authority approved the Gen-3 acquisition, it was anticipated that Gen-3 technologies would be in initial deployment by 2013 and fully deployed by the first quarter of 2016. DHS’s most recent estimate, completed in 2011, projects full deployment 6 years later, in 2022; that estimate contains significant uncertainty because of testing limitations, among other reasons. Similarly, the $2.1 billion cost estimate presented to DHS decision makers and Congress for planning purposes at the start of the acquisition is now $5.8 billion (covering the first 13 years of deployment, only 6 of which involve full deployment) and may still rise because of lingering uncertainty about the acquisition strategy. 
These changes in cost, schedule, and performance, along with maturation in the department’s business processes—including acquisitions and risk management—reinforce the importance of, and provide an opportunity for, reevaluating the mission need and alternatives in a more robust, considered, and systematic fashion, as called for in the Acquisition Life-cycle Framework, to help ensure that DHS makes the most sound investments possible. In addition, comprehensive and systematic information, developed using good practices for cost and schedule estimating like those described in the GAO Cost Estimating Guide, could help ensure that the department and policymakers have the most reliable performance, schedule, and cost information available for decision making. To help ensure that Gen-3’s public health and risk mitigation benefits justify the costs, that the program pursues an optimal solution, and that DHS bases its acquisition decisions on reliable performance, cost, and schedule information developed in accordance with guidance and good practices, we recommend that before continuing the Gen-3 acquisition, the Secretary of Homeland Security ensure that program and acquisition decision makers take the following two actions: 1. reevaluate the mission need and systematically analyze alternatives based on cost-benefit and risk information, using information from studies like those conducted by the Homeland Security Institute and Sandia National Laboratories, along with any other risk and cost information that may need to be developed, and 2. update other acquisition documents, such as the Acquisition Program Baseline and the Operational Requirements Document, to reflect any changes to performance, cost, and schedule information that result from the reevaluation of mission needs and alternatives. We provided a draft of this report to DHS for comment, and DHS provided written comments on the draft report, which are reproduced in full in appendix III. 
DHS also provided technical comments, which we incorporated as appropriate. DHS concurred with both recommendations, but did not concur that these actions need to be completed before continuing with the acquisition. With respect to the first recommendation to reevaluate the mission need and alternatives, DHS agreed that further evaluation of the mission need and alternatives is necessary. DHS stated that, on August 16, 2012, it directed the BioWatch program to complete an updated Analysis of Alternatives and Concept of Operations, which, according to a DHS official, must be completed before ADE-2B, but DHS did not specify how it plans to reevaluate the mission need. With respect to the second recommendation to update other acquisition documents to reflect any performance, cost, and schedule information that might result from reevaluation, DHS acknowledged that it may be necessary and appropriate to do so. However, DHS did not agree that it should implement these two recommendations before continuing the acquisition. In its response, DHS stated its intent to issue a solicitation for performance testing concurrent with the efforts to implement the recommendations. DHS stated that BioWatch will be required to return to the Acquisition Review Board prior to issuing a contract stemming from this solicitation. We are pleased that DHS plans to reevaluate the mission need and alternatives and that the department believes this action would be beneficial as it seeks to reduce programmatic risk and demonstrate sound fiscal stewardship in an increasingly constrained fiscal environment. Additionally, we commend DHS’s stated commitment to use Acquisition Management Directive 102-01 to ensure consistent and efficient acquisition management, support, review, and approval. 
The directive’s acquisition life-cycle framework is designed to establish a foundation based on critical examination of the capability gap an acquisition would fill and to build sequentially on that foundation to support solid, knowledge-based acquisition decision making. To satisfy the larger purpose of the framework—providing assurance that DHS makes judicious decisions about how to invest limited resources and implements them effectively—it is vital that the framework be used consistently, that each acquisition adhere to the framework throughout its entire life cycle, and that specified steps be completed in a sequential manner to support key acquisition decisions. Accordingly, we are concerned by DHS’s intention to continue the acquisition efforts before ensuring that it has fully developed the critical knowledge a comprehensive acquisition life-cycle framework effort is designed to provide. Our work showed that DHS does not have reasonable assurance that the solution it has been pursuing warrants investment of limited resources and that it represents an optimal solution. We believe it is possible that an earnest effort to reconsider the Gen-3 mission needs and alternatives would result in a different plan and course of action than the current effort. DHS stated in its response that it has directed the BioWatch program to complete an updated Analysis of Alternatives, but it remained silent on what actions, if any, it will take to reevaluate the mission need. As such, it is not clear from DHS’s response to what extent it intends to engage in a fresh reevaluation of the mission need in the broader context of DHS’s biodefense and related responsibilities before it undertakes efforts to update its Analysis of Alternatives. During discussions with program officials about the recommendation to reevaluate the mission need, the officials told us that they had resubmitted the original Mission Needs Statement to DHS for review. 
If DHS were to approve the original Mission Needs Statement and use it to guide the reevaluation of alternatives, it would overlook the intent of the recommendation. The intent is that DHS reevaluate existing capability gaps through the mission needs process to provide a foundation for future acquisition decision making—including the Analysis of Alternatives—that is grounded in better understanding and consensus about how filling these gaps will contribute to larger biodefense needs. Moreover, DHS plans to pursue testing of the Gen-3 solution—a solution that has driven DHS’s efforts for a number of years, including prior efforts to define mission need and analyze alternatives—even while agreeing to reconsider whether it is an appropriate course of action. This plan raises questions about whether the department plans to systematically and objectively reevaluate the mission need and alternatives for fulfilling that need. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 3 days from the report date. At that time, we will send copies to the Secretary of Homeland Security and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8757 or jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Department of Homeland Security (DHS) Acquisition Management Directive (AMD) 102-01—intended to ensure consistent and efficient acquisition management, support, review, and approval throughout the department—outlines the overall policy and structure for acquisition management at DHS. 
Specifically, this directive includes document and process requirements for each of the four phases of the department’s Acquisition Life-cycle Framework through which DHS determines whether it is sensible to proceed with a proposed acquisition. DHS formally approved the Gen-3 acquisition at Acquisition Decision Event (ADE) 2A in October 2009 without fully using and completing the processes and analyses in Phases 1 and 2 of the Acquisition Life-cycle Framework: (1) identify a capability need and (2) analyze and select the means to provide that capability. As shown in table 5, the documentation set DHS had available to inform the ADE-2A decision was incomplete and certain required information was not developed using the most reliable methods available. From May 2010 to June 2011, the BioWatch program completed a series of characterization tests on a Gen-3 candidate technology. The goals of this testing included characterizing the state of the market and evaluating the candidate systems’ abilities to meet performance requirements developed by the BioWatch program. As described in table 6, DHS completed four independent laboratory tests and a field test as part of characterization testing. Based on the results of the test events described above, an independent assessment was completed to evaluate the performance of the candidate Gen-3 system tested against the requirements developed by the BioWatch program. These requirements were listed in the Gen-3 operational requirements document, approved at ADE-2A in 2009, and included five key performance parameters (KPP)—the most important and non-negotiable requirements that must be met in order for the program to fulfill its purpose. As shown in table 7, the summary report found that of the five KPPs, the candidate system that completed testing met or partially met three, did not meet one, and that performance against the final KPP remained unresolved. 
In addition to the contact named above, Edward George, Assistant Director; Kathryn Godfrey; Allyson Goldstein; and Katy Trenholme made significant contributions to the work. Harold Brumm, Nirmal Chaudhary, Michelle Cooper, Marcia Crosse, Katherine Davis, Amanda Gill, Eric Hauswirth, Tracey King, Susanna Kuebler, David Lysy, Amanda Miller, Jan Montgomery, Jessica Orr, Katherine Trimble, and Teresa Tucker also provided support.
The 2001 anthrax attacks brought attention to the potentially devastating consequences of a biological attack. DHS operates a program, known as BioWatch, intended to help detect such an attack by airborne pathogens. The currently deployed technology can take 12 to 36 hours to confirm the presence of pathogens. DHS has been pursuing a third generation of the technology that will perform automated testing, potentially generating a result in under 6 hours and reducing labor costs. GAO was asked to examine issues related to the Gen-3 acquisition. This report addresses (1) the extent to which DHS used its acquisition life cycle framework to justify the need and consider alternatives, (2) the extent to which DHS developed reliable performance, schedule, and cost expectations, and (3) the steps that remain before Gen-3 can be deployed. GAO reviewed acquisition documentation and test results and interviewed agency officials from the BioWatch program and other DHS components with development, policy, and acquisition responsibilities. The Department of Homeland Security (DHS) approved the Generation-3 (Gen-3) acquisition in October 2009, but it did not fully engage in the early phases of its acquisition framework to ensure that the acquisition was grounded in a justified mission need and that it pursued an optimal solution. Critical processes in the early phases of DHS's framework are designed to (1) justify a mission need that warrants investment of resources and (2) select an optimal solution by evaluating viable alternatives based on risk, costs, and benefits. BioWatch program officials said that these early acquisition efforts were less comprehensive and systematic than the DHS framework calls for because there was already departmental consensus around the solution. Without a systematic effort to justify the need for the acquisition in the context of its costs, benefits, and risks, DHS has pursued goals and requirements for Gen-3 with limited assurance that they represent an optimal solution. 
Reevaluating the mission need and systematically analyzing alternatives could provide better assurance of an optimal solution. The performance, schedule, and cost expectations presented in required documents when DHS approved the acquisition were not developed in accordance with DHS guidance and good acquisition practices--like accounting for risk in schedule and cost estimates. BioWatch program officials said that DHS leadership directed them to prepare information quickly for the 2009 decision, which was accelerated by more than 1 year. Since DHS approved the acquisition in October 2009, the estimated date for full deployment has been delayed from fiscal year 2016 to fiscal year 2022, and the original life cycle cost estimate for the 2009 decision--a point estimate unadjusted for risk--was $2.1 billion. In June 2011, DHS provided a risk-adjusted estimate at the 80 percent confidence level of $5.8 billion. Comprehensive and systematic information, developed using good practices for cost and schedule estimating, could help ensure more reliable performance, schedule, and cost information for decision makers. Several steps remain before DHS can deploy and operate Gen-3. First, DHS must conduct additional performance and operational testing. This testing--estimated to take 3 years and cost $89 million--is intended to demonstrate full system performance, including the information technology network. To do so, the BioWatch program must address testing challenges including limitations on the use of live pathogens, among others. Following operational testing, DHS intends to decide whether to authorize the production and deployment of Gen-3. If Gen-3 is approved, the BioWatch program plans to prepare for deployment by working with BioWatch jurisdictions to develop location-specific plans to guide Gen-3 operations. 
DHS estimates show that about $5.7 billion of the $5.8 billion life-cycle cost remains to be spent to test, produce, deploy, and operate Gen-3 through fiscal year 2028. GAO recommends that before continuing the acquisition, DHS reevaluate the mission need and alternatives and develop performance, schedule, and cost information in accordance with guidance and good acquisition practices. DHS concurred with the recommendations but did not agree that they must be implemented before the acquisition continues; DHS plans to proceed with the acquisition while implementing them to avoid further delays. However, GAO believes the recommendations should be implemented before DHS proceeds with the acquisition, as discussed in this report.
The purpose of the HUBZone program, which was established by the HUBZone Act of 1997, is to stimulate economic development, through increased employment and capital investment, by providing federal contracting preferences to small businesses in economically distressed communities or HUBZone areas. The types of areas in which HUBZones may be located are defined by law and consist of the following:

Qualified census tracts. A qualified census tract has the meaning given the term by Congress for the low-income-housing tax credit program. The list of qualified census tracts is maintained and updated by the Department of Housing and Urban Development (HUD). As currently defined, qualified census tracts have either 50 percent or more of their households with incomes below 60 percent of the area median gross income or have a poverty rate of at least 25 percent. The population of all census tracts that satisfy one or both of these criteria cannot exceed 20 percent of the area population.

Qualified nonmetropolitan counties. Qualified nonmetropolitan counties are those that, based on decennial census data, are not located in a metropolitan statistical area and in which (1) the median household income is less than 80 percent of the nonmetropolitan state median household income; (2) the unemployment rate is not less than 140 percent of the average unemployment rate for either the nation or the state (whichever is lower); or (3) a difficult development area is located.

Qualified Indian reservations. A HUBZone qualified Indian reservation has the same meaning as the term "Indian Country" as defined in another federal statute, with some exceptions. These are all lands within the limits of any Indian reservation, all dependent Indian communities within U.S. borders, and all Indian allotments. In addition, portions of the State of Oklahoma qualify because they meet the Internal Revenue Service's definition of "former Indian reservations in Oklahoma."

Redesignated areas. These are census tracts or nonmetropolitan counties that no longer meet the economic criteria but remain eligible until after the release of the 2010 decennial census data.

Base closure areas. Areas within the external boundaries of former military bases that were closed by the Base Realignment and Closure Act (BRAC) qualify for HUBZone status for a 5-year period from the date of formal closure.

In order for a firm to be certified to participate in the HUBZone program, it must meet the following four criteria: the company must be small by SBA size standards; the company must be at least 51 percent owned and controlled by U.S. citizens; the company's principal office—the location where the greatest number of employees perform their work—must be located in a HUBZone; and at least 35 percent of the company's full-time (or full-time equivalent) employees must reside in a HUBZone. As of February 2008, 12,986 certified firms participated in the HUBZone program. More than 4,200 HUBZone firms obtained approximately $8.1 billion in federal contracts in fiscal year 2007. The annual federal contracting goal for HUBZone small businesses is 3 percent of all prime contract awards—contracts that are awarded directly by an agency. Our June 2008 report found that a series of statutory changes have resulted in an increase in the number and types of HUBZone areas. These changes could diffuse (or limit) the economic benefits of the program. Further, while SBA relies on federal law to identify qualified HUBZone areas, its HUBZone map is inaccurate. In recent years, amendments to the HUBZone Act and other statutes have increased the number and types of HUBZone areas. The original HUBZone Act of 1997 defined a HUBZone as any area within a qualified census tract, a qualified nonmetropolitan county, or lands within the boundaries of a federally recognized Indian reservation.
However, subsequent legislation revised the definitions of the original categories and expanded the HUBZone definition to include new types of qualified areas (see fig. 1). Subsequent to the various statutory changes, the number of HUBZone areas grew from 7,895 in calendar year 1999 to 14,364 in 2006. SBA's data show that, as of 2006, there were 12,218 qualified census tracts; 1,301 nonmetropolitan counties; 651 Indian Country areas; 82 BRAC areas; and 112 difficult development areas. In expanding the types of HUBZone areas, the definition of economic distress has been broadened to include measures that were not in place in the initial statute. For example, a 2000 statute amended the HUBZone area definition to allow census tracts or nonmetropolitan counties that ceased to be qualified to remain qualified for a further 3-year period as "redesignated areas." A 2004 statute permitted these same areas to remain qualified until the release date of the 2010 census data. Further, in 2005, Congress expanded the definition of a qualified nonmetropolitan county to include difficult development areas outside the continental U.S.—areas with high construction, land, and utility costs relative to area income—and such counties could include areas not normally considered economically distressed. As a result, the expanded HUBZone criteria now allow for HUBZone areas that are less economically distressed than the areas initially designated. HUBZone program officials stated that the expansion can diffuse the impact or potential impact of the program on existing HUBZone areas. We recognize that establishing new HUBZone areas can potentially provide economic benefits for these areas by helping them attract firms that make investments and employ HUBZone residents. However, such an expansion could result in less targeting of areas of greatest economic distress.
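The statutory tests that determine whether a nonmetropolitan county qualifies reduce to a simple rule check. The following is a minimal sketch (the function and parameter names are hypothetical, not SBA's actual implementation) of the three alternative conditions described earlier:

```python
def county_is_qualified_nonmetro(in_metro_area, median_income,
                                 state_nonmetro_median_income,
                                 unemployment_rate, national_rate, state_rate,
                                 has_difficult_development_area=False):
    """Sketch of the qualified-nonmetropolitan-county tests (hypothetical names).

    The county must lie outside a metropolitan statistical area and meet at
    least one of three conditions: (1) median household income below 80 percent
    of the nonmetropolitan state median, (2) unemployment at or above 140
    percent of the lower of the national or state average rate, or (3) the
    presence of a difficult development area in the county.
    """
    if in_metro_area:
        return False
    income_test = median_income < 0.80 * state_nonmetro_median_income
    unemployment_test = unemployment_rate >= 1.40 * min(national_rate, state_rate)
    return income_test or unemployment_test or has_difficult_development_area

# A nonmetro county with 5.0 percent unemployment qualifies when the lower of
# the national (4.6) and state (3.4) rates implies a 4.76 percent threshold.
print(county_is_qualified_nonmetro(False, 40000, 42000, 5.0, 4.6, 3.4))  # True
```

Because the conditions are alternatives rather than cumulative requirements, a county with healthy income levels can still qualify on unemployment alone, which is one way the criteria admit areas of varying economic distress.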
Because HUBZone areas are defined by federal statute, SBA program staff exercise no discretion in identifying them; however, SBA has not always designated these areas correctly on its Web map. To identify and map HUBZone areas, SBA relies on a mapping contractor and data from other executive agencies (see fig. 2). Essentially, the map is SBA's primary interface with small businesses seeking to determine whether they are located in a HUBZone and can apply for HUBZone certification. During the course of our review, we identified two problems with SBA's HUBZone map. First, the map includes some areas that do not meet the statutory definition of a HUBZone area. As noted previously, counties containing difficult development areas are eligible in their entirety for the HUBZone program only if they are not located in a metropolitan statistical area. However, we found that SBA's HUBZone map includes 50 metropolitan counties as difficult development areas that do not meet this or any other criterion for inclusion as a HUBZone area. As a result of these errors, ineligible firms have obtained HUBZone certification and received federal contracts. As of December 2007, 344 certified HUBZone firms were located in ineligible areas in these 50 counties. Further, from October 2006 through March 2008, federal agencies obligated about $5 million through HUBZone set-aside contracts to 12 firms located in these ineligible areas. Second, while SBA's policy is to have its contractor update the HUBZone map as needed, the map has not been updated since August 2006. Since that time, additional data such as unemployment rates from the Bureau of Labor Statistics (BLS) have become available. Although SBA officials told us that they have been working to have the contractor update the mapping system, no subcontract was in place as of May 2008.
While an analysis of the 2008 list of qualified census tracts showed that the number of tracts had not changed since the map was last updated, our analysis of 2007 BLS unemployment data indicated that 27 additional nonmetropolitan counties should have been identified on the map, which would have allowed qualified firms in those counties to participate in the program. Because firms are not likely to receive information on the HUBZone status of areas from other sources, firms in these 27 counties would likely have concluded from the map that they were ineligible to participate in the program and could not benefit from the contracting incentives that certification provides. In our June 2008 report, we recommended that SBA take immediate steps to correct and update the map and implement procedures to ensure that it is updated with the most recently available data on a more frequent basis. In response to our recommendation, SBA indicated that it plans to issue a new contract to administer the HUBZone map and anticipates that the maps will be updated and available no later than August 29, 2008. Further, SBA stated that, during the process of issuing the new contract, the HUBZone program would issue new internal procedures to ensure that the map is updated continually. Our June 2008 report also found that the policies and procedures upon which SBA relies to certify and monitor firms provide limited assurance that only eligible firms participate in the HUBZone program. While internal control standards for federal agencies state that agencies should document and verify information that they collect on their programs, SBA obtains supporting documentation from firms in limited instances. In addition, SBA does not follow its own policy of recertifying all firms every 3 years, and has not met its informal goal of 60 days for removing firms deemed ineligible from its list of certified firms.
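The 35 percent employee-residency requirement described earlier is a purely quantitative test, which is why it lends itself to automated screening. A minimal sketch of that check follows (hypothetical names; SBA's actual application system is not public):

```python
def passes_residency_screen(total_employees, hubzone_residents):
    """Check whether at least 35 percent of a firm's full-time (or
    full-time-equivalent) employees reside in a HUBZone.

    A sketch of the statutory test with hypothetical names, not SBA's system.
    """
    if total_employees <= 0:
        return False
    return hubzone_residents / total_employees >= 0.35

# A 20-employee firm needs at least 7 HUBZone residents (7/20 = 35 percent).
print(passes_residency_screen(20, 7))  # True
print(passes_residency_screen(20, 6))  # False
```

A check like this can only validate arithmetic on the numbers a firm reports; it cannot detect false information, which is the gap the report's findings on supporting documentation address.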
Firms apply for HUBZone certification using an online application system, which according to HUBZone program officials employs automated logic steps to screen out ineligible firms based on the information entered on the application. For example, firms enter information such as their total number of employees and number of employees that reside in a HUBZone. Based on this information, the system then calculates whether the number of employees residing in a HUBZone equals 35 percent or more of total employees, the required level for HUBZone eligibility. HUBZone program staff then review the applications to determine if more information is required. While SBA’s policy states that supporting documentation normally is not required, it notes that agency staff may request and consider such documentation, as necessary. No specific guidance or criteria are provided to program staff for this purpose; rather, the policy allows staff to determine what circumstances warrant a request for supporting documentation. In determining whether additional information is required, HUBZone program officials stated that they generally consult sources such as firms’ or state governments’ Web sites that contain information on firms incorporated in the state. SBA ultimately approves the majority of applications submitted. For example, in fiscal year 2007, SBA approved about 78 percent of the applications submitted. To ensure the continued eligibility of certified HUBZone firms, SBA requires firms to resubmit an application. That is, to be recertified, firms re-enter information in the online application system, and HUBZone program officials review it. In 2004, SBA changed the recertification period from an annual recertification to every 3 years. According to HUBZone program officials, they generally limit their reviews to comparing resubmitted information to the original application. 
The officials added that significant changes from the initial application can trigger a request for additional information or documentation. If concerns about eligibility are raised during the recertification process, SBA will propose decertification or removal from the list of eligible HUBZone firms. Firms that are proposed for decertification can challenge that proposed outcome through a due-process mechanism. SBA ultimately decertifies firms that do not challenge the proposed decertification and those that cannot provide additional evidence that they continue to meet the eligibility requirements. For example, SBA began 6,798 recertifications in fiscal years 2005, 2006, and 2007 and either had proposed to decertify or completed decertification of 5,201 of the firms (about 77 percent) as of January 22, 2008 (the date of the data set). Although SBA does not systematically track the reasons why firms are decertified, HUBZone program officials noted that many firms do not respond to SBA’s request for updated information. Internal control standards for federal agencies and programs require that agencies collect and maintain documentation and verify information to support their programs. However, SBA verifies the information it receives from firms in limited instances. For example, our review of the 125 applications that were submitted in September 2007 shows that HUBZone program staff requested additional information but not supporting documentation for 10 (8 percent) of the applications; requested supporting documentation for 45 (36 percent) of the applications; and conducted one site visit. According to HUBZone program officials, they did not more routinely verify the information because they generally relied on their automated processes and status protest process. For instance, they said they did not request documentation to support each firm’s application because the application system employs automated logic steps to screen out ineligible firms. 
For example, the application system calculates the percentage of a firm's employees that reside in a HUBZone and screens out firms that do not meet the 35 percent requirement. But the automated application system would not necessarily screen out applicants that submit false information to obtain a HUBZone certification. Rather than obtaining supporting documentation on a more regular basis during certification and recertification, SBA consistently requests such documentation only when it conducts program examinations, which cover a small percentage of firms. Since fiscal year 2004, SBA's policy has been to conduct program examinations on 5 percent of firms each year. From fiscal years 2004 through 2006, nearly two-thirds of firms SBA examined were decertified, and in fiscal year 2007, 430 of 715 firms (about 60 percent) were decertified or proposed for decertification. The number of firms decertified includes firms that the agency determined were ineligible and were decertified, and firms that requested to be decertified. Because SBA limits its program examinations to 5 percent of firms each year, firms can be in the program for years without being examined. For example, we found that 2,637 of the 3,348 firms (approximately 79 percent) that had been in the program for 6 years or more had not been examined. In addition to performing program examinations on a limited number of firms, HUBZone program officials rarely conduct site visits during program examinations to verify a firm's information. In our report, we recommended that SBA develop and implement guidance to more routinely and consistently obtain supporting documentation upon application and conduct more frequent site visits, as appropriate, to ensure that firms applying for certification are eligible.
In response to this recommendation, SBA stated it was formulating procedures that would provide sharper guidance about when supporting documentation and site visits would be required, and plans to identify potential areas of concern during certification that would mandate additional documentation and site visits. As noted previously, since 2004 SBA's policies have required the agency to recertify all HUBZone firms every 3 years. Recertification presents another opportunity for SBA to review information from firms and thus help monitor program activity. However, SBA has failed to recertify 4,655 of the 11,370 firms (more than 40 percent) that have been in the program for more than 3 years. Of the 4,655 firms that should have been recertified, 689 have been in the program for more than 6 years. According to HUBZone program officials, the agency lacked sufficient staff to complete the recertifications. However, the agency hired a contractor in December 2007 to help conduct recertifications, using the same process that SBA staff currently use. Although SBA has acquired these additional resources, the agency lacks specific timeframes for eliminating the backlog. As a result of the backlog, the periods during which some firms go unmonitored and unreviewed for eligibility are longer than SBA policy allows, increasing the risk that ineligible firms may be participating in the program. In our recent report, we recommended that SBA establish a specific time frame for eliminating the backlog of recertifications and take the necessary steps to ensure that recertifications are completed in a more timely fashion in the future. In its response to this recommendation, SBA noted that the HUBZone program had obtained additional staff and that the backlog of pending recertifications would be completed by September 30, 2008.
Further, to ensure that recertifications will be handled in a more timely manner, SBA stated that the HUBZone program has made dedicated staffing changes and will issue explicit changes to procedures. While SBA policies for the HUBZone program include procedures for certifications, recertifications, and program examinations, they do not specify a timeframe for processing decertifications—the determinations subsequent to recertification reviews or examinations that firms are no longer eligible to participate in the HUBZone program. Although SBA does not have written guidance for the decertification timeframe, the HUBZone program office negotiated an informal (unwritten) goal of 60 days with the SBA Inspector General (IG) in 2006. In recent years, SBA ultimately decertified the vast majority of firms proposed for decertification, but has not met its 60-day goal consistently (see table 1). From fiscal years 2004 through 2007, SBA failed to resolve proposed decertifications within its goal of 60 days for more than 3,200 firms. While SBA's timeliness has improved, in fiscal year 2007 more than 400 proposed decertifications (about 33 percent) were still not resolved in a timely manner. As a consequence of generally not meeting its 60-day goal, lags in the processing of decertifications have increased the risk of ineligible firms participating in the program. In our June 2008 report, we recommended that SBA formalize and adhere to a specific time frame for processing firms proposed for decertification in the future. In response, SBA noted that it would issue new procedures to clarify and formalize the decertification process and its timelines. SBA stated that the new decertification procedures would establish a 60-calendar-day deadline to complete any proposed decertification. Our June 2008 report also found that SBA has taken limited steps to assess the effectiveness of the HUBZone program.
SBA’s three performance measures for the HUBZone program do not directly measure the effect of the program on communities. Moreover, federal agencies did not meet the government-wide contracting goal for the HUBZone program in fiscal years 2003 through 2006 (the most recent years for which goaling data are available). While SBA has some measures in place to assess the performance of the HUBZone program, the agency has not implemented its plans to conduct an evaluation of the program’s benefits. According to the Government Performance and Results Act of 1993, federal agencies are required to identify results-oriented goals and measure performance toward the achievement of their goals. We previously have reported on the attributes of effective performance measures, and reported that for performance measures to be useful in assessing program performance, they should be linked or aligned with program goals and cover the activities that an entity is expected to perform to support the intent of the program. According to SBA’s fiscal year 2007 Annual Performance Report, the three performance measures for the HUBZone program were: (1) the number of small businesses assisted (which SBA defines as the number of applications approved and the number of recertifications processed), (2) the annual value of federal contracts awarded to HUBZone firms, and (3) the number of program examinations completed. These measures provide some data on program activity and measure contract dollars awarded to HUBZone firms. However, they do not directly measure the program’s effect on firms (such as growth in employment or changes in capital investment) or directly measure the program’s effect on the communities in which the firms are located (for instance, changes in median household income or poverty levels). 
Similarly, the Office of Management and Budget (OMB) noted in its 2005 Program Assessment Rating Tool (PART) that SBA needed to develop baseline measures for some of its HUBZone performance measures and encouraged SBA to focus on more outcome-oriented measures that better evaluate the results of the program. The PART assessment also documented plans that SBA had to conduct an analysis of the economic impact of the HUBZone program on a community-by-community basis using data from the 2000 and 2010 decennial census. However, SBA officials indicated that the agency has not devoted resources to implement either of these strategies for assessing the results of the program. Yet by not evaluating the HUBZone program’s benefits, SBA lacks key information that could help it better manage the program and inform the Congress of its results. As part of our work, we conducted site visits to four HUBZone areas (Lawton, Oklahoma; Lowndes County, Georgia; and Long Beach and Los Angeles, California) to better understand to what extent stakeholders perceived that the HUBZone program generated benefits. For all four HUBZone areas, the perceived benefits of the program varied, with some firms indicating they have been able to win contracts and expand their firms and others indicating they had not realized any benefits from the program. Officials representing economic development entities varied in their knowledge of the program, with some stating they lacked information on the program’s effect that could help them inform small businesses of its potential benefits. In our report, we recommended that SBA further develop measures and implement plans to assess the effectiveness of the HUBZone program. In its response to this recommendation, SBA stated that it would develop an assessment tool to measure the economic benefits that accrue to areas in the HUBZone program and that the HUBZone program would then issue periodic reports accompanied by the underlying data. 
Although contracting dollars awarded to HUBZone firms have increased since fiscal year 2003—when the statutory goal of awarding 3 percent of federally funded contract dollars to HUBZone firms went into effect— federal agencies collectively still have not met that goal. According to data from SBA's goaling reports, for the four fiscal years from 2003 through 2006, the percentage of prime contracting dollars awarded to HUBZone firms increased, with the total for fiscal year 2006 at just above 2 percent (see table 2). In fiscal year 2006, 8 of 24 federal agencies met their HUBZone goals. Of the 8 agencies, 4 had goals higher than the 3 percent requirement and were able to meet the higher goals. Of the 16 agencies not meeting their HUBZone goal, 10 awarded less than 2 percent of their small business-eligible contracting dollars to HUBZone firms. Madam Chairwoman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or shearw@gao.gov. Individuals making key contributions to this testimony included Paige Smith (Assistant Director), Triana Bash, Tania Calhoun, Bruce Causseaux, Alison Gerry, Cindy Gilbert, Julia Kennon, Terence Lam, Tarek Mahmassani, John Mingus, Marc Molino, Barbara Roesmann, and Bill Woods. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Small Business Administration's (SBA) Historically Underutilized Business Zone (HUBZone) program provides federal contracting assistance to small firms located in economically distressed areas, with the intent of stimulating economic development. Questions have been raised about whether the program is targeting the locations and businesses that Congress intended to assist. This testimony focuses on (1) the criteria and process that SBA uses to identify and map HUBZone areas; (2) the mechanisms SBA uses to ensure that only eligible small businesses participate in the program; and (3) the actions SBA has taken to assess the results of the program and the extent to which federal agencies have met HUBZone contracting goals. To address these objectives, GAO analyzed statutory provisions as well as SBA, Census, and contracting data and interviewed SBA and other federal and local officials. SBA relies on federal law to identify qualified HUBZone areas, and recent statutory changes have resulted in an increase in the number and types of HUBZone areas--changes that could diffuse the economic benefits of the program. Further, the map that SBA uses to help firms interested in participating in the program to determine if they are located in a HUBZone area is inaccurate. Specifically, the map incorrectly includes 50 metropolitan counties and excludes 27 nonmetropolitan counties. As a result, ineligible small businesses participated in the program, and eligible businesses have not been able to participate. The mechanisms that SBA uses to certify and monitor firms provide limited assurance that only eligible firms participate in the program. Although internal control standards state that agencies should verify information they collect, SBA verifies the information reported by firms on their application or during recertification--its process for monitoring firms--in limited instances and does not follow its own policy of recertifying all firms every 3 years. 
GAO found that more than 4,600 firms that had been in the program for at least 3 years went unmonitored. Further, SBA lacks a formal policy on how quickly it needs to make a final determination on decertifying firms that may no longer be eligible for the program. Of the more than 3,600 firms proposed for decertification in fiscal years 2006 and 2007, more than 1,400 were not processed within 60 days--SBA's unwritten target. As a result of these weaknesses, there is an increased risk that ineligible firms have participated in the program and had opportunities to receive federal contracts. SBA has taken limited steps to assess the effectiveness of the HUBZone program, and from 2003 to 2006 federal agencies did not meet the government-wide contracting goal for the HUBZone program. Federal agencies are required to identify results-oriented goals and measure performance toward the achievement of their goals. SBA tracks the number of firms certified or recertified, the annual value of contracts awarded to HUBZone firms, and the number of program examinations completed annually, but has not devoted resources to completing an evaluation of the program. Consequently, SBA lacks key information that could help it better manage and assess the results of the program. Finally, most federal agencies did not meet their HUBZone contracting goals during fiscal year 2006, the most recent year for which we had data. While the percentage of prime contracting dollars awarded to HUBZone firms increased in each fiscal year from 2003 to 2006, the 2006 awards fell short of the government-wide 3 percent goal by about one-third.
DOD contracting officers use a structured approach called the Weighted Guidelines Method to develop profit objectives for use in contract negotiations. Using these profit guidelines, contracting officers address a contractor's (1) risk in fulfilling the contract requirements, known as performance risk; (2) degree of cost risk because of the type of contract (e.g., fixed-price versus cost contract); and (3) investment in facilities that benefit DOD. Prior to the profit policy change, the performance risk factor consisted of three elements:

Technical—the technical uncertainties of performance.

Management—the degree of management effort necessary to ensure that contract requirements are met.

Cost control—the contractor's efforts to reduce and control costs.

In the National Defense Authorization Act for Fiscal Year 2000, Congress included provisions to stimulate technical innovation in military research and development. Section 813 required DOD to review its profit guidelines to consider whether modifications to the guidelines—such as placing increased emphasis on technical risk as a factor for determining appropriate profit margins—would provide an incentive for contractors to develop and produce complex and innovative new technologies. After completing its review, DOD reported to Congress that it planned to make two changes to the guidelines. As shown in table 1, the first change was to increase the weight contracting officers would likely assign to technical risk by reducing performance risk from three to two elements. The second was to add a special incentive for contractors that propose significant technological innovation. Using the technology incentive, contracting officers can assign a profit range of 6 to 10 percent for the technical element instead of the standard range of 2 to 6 percent. On December 13, 2000, DOD published a final rule in the Federal Register to implement the two changes.
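In simplified terms, the change to the technical element's profit range can be sketched as follows. This is a sketch of one element only, with hypothetical names; the full Weighted Guidelines Method combines several weighted factors and is not reproduced here:

```python
# Profit ranges (in percent) for the technical element of performance risk
# under the revised guidelines: 2-6 percent normally, 6-10 percent when the
# technology incentive applies. Simplified, hypothetical sketch of one
# element, not the full Weighted Guidelines computation.
STANDARD_RANGE = (2.0, 6.0)
TECH_INCENTIVE_RANGE = (6.0, 10.0)

def technical_profit_objective(assigned_value, use_tech_incentive=False):
    """Clamp the contracting officer's assigned value to the applicable range."""
    low, high = TECH_INCENTIVE_RANGE if use_tech_incentive else STANDARD_RANGE
    return max(low, min(high, assigned_value))

print(technical_profit_objective(8.0))                           # 6.0
print(technical_profit_objective(8.0, use_tech_incentive=True))  # 8.0
```

The point of the incentive is visible in the clamp: a value of 8 percent is unreachable under the standard range but permissible once the contracting officer invokes the technology incentive.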
(Appendix I presents an example of the application of the incentive to a contract.) During the same time period, DOD sought to realize the benefits of best commercial practices by revising its policies that guide the system acquisition process. The result was a new system acquisition life cycle that separates technology development and system development. The technology development phase generally begins with paper studies of alternative technology concepts for meeting a mission (concept exploration) and ends with the demonstration of component technology in a relevant environment to reduce the risk of integrating the components or subsystems into a full system (component advanced development). A program is usually initiated at the beginning of system development, at which point the system's technology should be mature. During system development and demonstration, the subsystems and components are integrated into the system, the design is stabilized, and the system is demonstrated in a realistic environment. The system then enters low-rate initial production, during which the manufacturing capability is established. By the time the system reaches full-rate production, the technology should be mature, the design stable, and the manufacturing processes established. The profit guidelines containing the technology incentive do not apply to most research and development contracts and, therefore, the incentive has limited reach in the phases of DOD's acquisition cycle where technology innovation is expected to be high. Many contracts awarded in these high innovation phases have technical reports as contract deliverables, and these are appropriately excluded from the incentive. The profit guidelines containing the incentive do not apply to contracts awarded with competition, which is commonly the case for research and development contracts. Also, contracting officers already have available another mechanism, award fees, to reward innovation.
Table 2 shows the expected level of innovation and the typical contract deliverable for each phase of DOD’s system acquisition cycle. It also shows what type of profit (fixed/incentive fee versus award fee) is used to reward contractors during each phase—the profit guidelines only apply to fixed/incentive fee contracts. For fixed/incentive fee contracts, the percentage and dollar value of awards made without competition is shown—the profit guidelines only apply to these awards. The table shows that the guidelines (and therefore the technology incentive) do not apply to many contracts in early research and development where innovation is a priority. The profit policy excludes many contracts awarded during technology development (concept exploration and component advanced development). Concept exploration commonly consists of paper studies of alternative concepts for meeting a mission and, therefore, contracts generally have a technical report as their primary deliverable. The technology incentive range does not apply to efforts restricted to studies, analyses, or demonstrations that have a technical report as their primary contract deliverable. Technical reports were excluded from coverage because these efforts do not involve the risk inherent in producing and fielding weapon system hardware. Technology development contracts are typically awarded through competition. DOD’s profit guidelines do not apply to competitively awarded contracts because price reasonableness is established through price competition rather than through use of the guidelines. Contracting officers have available another contracting mechanism— award fees—to reward innovation in research and development. Award fees are used to motivate contractor performance in those areas critical to program success, such as technical, logistics support, cost, and schedule. 
Contracting officers can use the award fee to encourage contractors to develop innovative new technologies by including these objectives in the criteria for evaluating how much of the award fee the contractor has earned. The definition contained in the policy guidelines of what qualifies for the technology incentive is so broad that it could be applied to almost any contract with enhanced system performance. Our discussions with contracting officials indicate that there is confusion over how and for how long the incentive should be applied. This confusion may lead to inconsistent and possibly inappropriate application of the incentive and could result in contractors being paid more profit for their current level of innovation, not for the intended new technological innovations that significantly enhance performance, improve reliability, or reduce costs. The rule states that contracting officers may use the technology incentive range when a contractor proposes to develop, produce, or apply innovative new technologies during contract performance. It further states that contracting officers are to use the incentive only for the most innovative efforts. The rule defines innovation as “Development or application of new technology that fundamentally changes the characteristics of an existing product or system and that results in increased technical performance, improved reliability, or reduced costs; or New products or systems that contain significant technological advances over the products or systems they are replacing.” Although the rule describes in broad terms when the application of the incentive is appropriate, it leaves many questions unanswered in defining key terms. For example, how “new” must a technology be to qualify? Does “new” mean it is just out of the laboratory and has never been used before on any system, or does it refer to a recently developed technology that has been used on other products but not on the product in question? 
Should the incentive apply to demonstrated technology or to the promise to develop technology? And if a contractor is awarded additional profit for developing, producing, or applying new innovative technology, when should the reward stop? Should it apply only to the immediate contract, or should a contractor receive the reward throughout some portion of production contracts? By the same token, what measures are to be applied in determining whether a technological advance is “significant” or whether new technology “fundamentally changes” a product or system? Without this information, the rule could be interpreted so that the incentive could apply to almost any program with more demanding performance characteristics than the system being replaced. Although, at the time of our review, the new rule had not been widely used, we discussed with agency officials the circumstances under which they might apply the technology incentive. Air Force, Army, and Navy officials agreed that the technology incentive could apply to both research and development and production contracts, but they did not interpret the rule’s guidance on when to apply the incentive in the same manner. For example, officials at two Air Force program offices judged that upgrades to their systems that included state-of-the-art technology used on other products would not qualify for the technology incentive, but those at an Army office said that similar applications of state-of-the-art technology to their system would qualify. In fact, contracting officials at two Army program offices told us that all weapon systems at their buying command incorporated state-of-the-art, leading-edge technology and would, therefore, qualify for the incentive. 
On the other hand, officials for one Air Force system did not believe a future upgrade to their system that may incorporate brand-new technology developed by another military service would qualify for the incentive because the other service would have developed the new technology. Finally, the contracting officer for a Navy system that incorporated brand-new, never-before-used technology that allowed the system to exceed performance requirements stated that the system would qualify for the incentive. These examples point to potential confusion over how the rule’s broad definition of technological innovation should be applied. The officials were also uncertain about how long a contractor should be rewarded with the technology incentive for significant, new innovative technology introduced in the research and development phases of the acquisition process. For example, a procurement official for an Army system currently in the latter stages of research and development stated that the system may qualify for the incentive during production, depending on how the language in the rule is interpreted. A technical official for this system at first stated that, hopefully, innovation and risk would be finished before the system enters production, and therefore, the system would not qualify for the incentive at that point. But, after reading the language in the rule (“New products or systems that contain significant technological advances over the products or systems they are replacing”), he said the system may qualify after all. Also, technical officials for the Navy system discussed previously did not disagree with the contracting officer that the system, with new technology that enhanced performance, would qualify for the incentive. 
However, they stated that, in general, the technology incentive should be awarded during research and development because, by low-rate production, the technology should be set, and, during production, the emphasis should be on making manufacturing processes more efficient and reducing costs. The new profit guidelines do not identify how the incentive relates to the revised policies that guide DOD’s system acquisition process. The new acquisition process emphasizes technology maturity before committing to a program to reduce its risk, but the profit guidelines reward contractors with additional profit for introducing new technology, sending mixed signals about the relative importance of innovation and technology maturity. The new profit policy could be interpreted in such a way as to be inconsistent with the new acquisition process. In DOD’s traditional system acquisition process, program managers matured a system’s technology throughout the weapon system phases, resulting in a system that cost significantly more, took longer to produce, and delivered less than was promised. A new weapon system was encouraged to possess performance features that significantly distinguished it from other systems. Consequently, the acquisition environment led DOD program managers to promote performance features and design characteristics that relied on immature technologies. Managers were also subject to the pressures of successfully competing for the funds to start and sustain a DOD acquisition program. This encouraged managers to launch product developments with more technical unknowns and less knowledge about performance and production risks than best commercial practices dictate. These managers relied on attaining technology, design, and manufacturing knowledge concurrently—in the higher-cost environment of product development—throughout the weapon system phases. 
In keeping with best commercial practices, DOD adopted a new system acquisition approach in which key acquisition and long-term funding commitments are discouraged until technology is mature and risks are far better understood than under the traditional process. DOD’s new system acquisition life cycle separates technology development from system development. A system’s technology should be mature and demonstrated before a program is initiated and system development begins. According to DOD Instruction 5000.2, “entrance into System Development and Demonstration is dependent on three things: technology (including software) maturity, validated requirements, and funding. Unless some other factor is overriding in its impact, the maturity of the technology will determine the path to be followed.” When the system goes into full-rate production, the technology should be mature, the design stable, and the manufacturing processes established. The technology incentive is not tied to the new acquisition cycle, and the profit policy does not address technology maturation and risk reduction, which are central to DOD’s revised acquisition policies. The revised acquisition policies stress that technology be mature and demonstrated before it is integrated into a system. But, the profit policy does not discuss when in the acquisition cycle innovative technologies should be rewarded with higher profits. Nor does the profit policy address if or when contractor efforts to mature innovations should be rewarded through use of the technology incentive. As a result, the risk is created that the two policies will work against each other rather than reinforce each other. The new profit policy may reward contractors for existing levels of innovation rather than incentivize additional innovation. 
The definition of innovation contained in the rule is overly broad and covers all programs that improve performance over systems that are being replaced—the very reason for having a program in the first place. Moreover, the rule is silent on several issues, including how long contractors should be rewarded for significant innovation. And the relationship of the profit policy to the acquisition process is not addressed, sending mixed signals to contractors and contracting officials as to the relative importance of technology innovation and technology maturation at different points in the acquisition cycle. To assure that the technology incentive is appropriately interpreted and applied, we recommend that the Secretary of Defense clarify the definition of innovation contained in the profit policy rule; define how long contractors should be rewarded for innovations introduced during research and development phases; and reconcile the relationship of the technology incentive with DOD’s new acquisition process, including the emphasis on technology maturation. In written comments on a draft of this report, DOD partially concurred with the first recommendation that it clarify the definition of innovation contained in the profit policy rule. DOD stated that it would examine how the policy is being used after it has been in place for a year and, at that time, determine if the types of innovation that may be rewarded with the technology incentive factor can be stated more clearly. DOD partially concurred with our second recommendation that it define how long contractors should be rewarded for innovations introduced during research and development phases. DOD stated that, after the policy has been in place for a year, it will re-examine the regulations to determine if there are relevant factors that can be provided for contracting officers to consider in making this judgment. 
DOD disagreed with our recommendation that it reconcile the relationship of the technology incentive to the new acquisition process. DOD stated that it did not believe the revised profit policy was inconsistent with its new 5000 series acquisition regulations. DOD pointed out that the 5000 series stresses the need for balance among several key factors in planning acquisition strategies and that, ultimately, DOD decides what it will buy and how much technological risk it will accept. According to DOD, after that decision is made, the technology incentive factor can be used to reward contractors for the technical risk they undertake in developing or applying new technologies or significant technological advances. While the 5000 series discusses several factors to be considered in planning acquisition strategies, there is a clear emphasis on technology maturity to reduce program risk as a system progresses through the acquisition process. DOD Regulation 5000.2-R identifies technology maturity as a “principal element of program risk.” DOD Instruction 5000.2 provides managers with specific guidance for managing this element of program risk and makes it clear that technology should be matured and demonstrated during the technology development phase before a program is initiated and component technology is integrated into a system. The instruction states that “unless some other factor is overriding in its impact, the maturity of the technology will determine the path to be followed.” According to the instruction, “technology must have been demonstrated in a relevant environment … to be considered mature enough to use for product development in systems integration. 
If technology is not mature, the DOD Component shall use alternative technology that is mature and that can meet the user’s needs.” Although the acquisition guidance emphasizes technology maturity to reduce program risk, the profit policy rewards contractors with additional profit for undertaking technical risk in developing or applying new technology at unspecified points in the acquisition cycle. Because the profit policy does not discuss when in the acquisition cycle innovative technologies should be rewarded with higher profits, it could be interpreted in such a way as to be inconsistent with the new acquisition process. We discussed this issue with the officials in the office responsible for developing the new 5000 series. These officials were familiar with the profit policy rule and, while they noted that the two were not necessarily inconsistent, the potential for misinterpretation existed. These officials said that if innovation meant new—but mature—technology, there would be no conflict between the policies. On the other hand, they noted that if “innovation” was misread for “risk taking” or “technology immaturity,” especially late in the acquisition cycle, the policies could work against each other. They added that the technology incentive would need to be carefully managed to prevent a conflict and that this could be achieved through means such as training. We continue to believe that the best approach for managing this potential conflict is to explicitly discuss the relationship between the two policies—particularly as they relate to innovation—in the guidelines contained in the profit policy regulation on when the technology incentive should be used. DOD comments appear in appendix II. 
To determine whether the new profit policy is likely to achieve its objective of stimulating increased innovation, we selected programs at some of DOD’s highest dollar buying commands to review how contracting and acquisition officials would apply the new policy to various programs. We selected one buying command to represent each service. We discussed the profit policy rule in general, the types of contracts the rule might apply to, and the points in DOD’s acquisition cycle in which it could be applied. We also talked specifically about each program selected to determine whether there were innovative technologies that would have qualified for the technology incentive if the policy had been in effect at the time of contract award. In addition, we asked representatives from some of the programs to reprice a sample contract using the new profit policy to determine whether the profit objective would have been higher. We also analyzed DOD’s fiscal year 2000 contracting database (DD350) to identify the types of contracts awarded at each phase of the acquisition cycle, the percentage of dollars awarded using the various profit award mechanisms, and the proportion of dollars awarded with or without competition. To assess the relationship between the new profit policy and the new acquisition process, we analyzed what the acquisition guidance and profit policy say about technology development, maturation, and innovation. We also discussed these policies with DOD officials who developed them. We reviewed relevant documents and held discussions with officials at the U.S. Army Aviation and Missile Command, Huntsville, Alabama; Aeronautical Systems Center, Dayton, Ohio; Naval Air Systems Command, Patuxent River, Maryland; Office of Defense Procurement, Cost, Pricing, and Finance, Washington, D.C.; and Office of the Deputy Under Secretary of Defense (Acquisition Reform) for Acquisition, Technology, and Logistics, Washington, D.C. 
We performed our review between January 2001 and May 2001 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute this report until 30 days from its date. At that time, we will send copies of this report to the appropriate congressional committees; the Honorable Donald H. Rumsfeld, Secretary of Defense; and the Honorable Mitchell E. Daniels Jr., Director, Office of Management and Budget. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are Karen Zuckerstein, Erin Baker, Julia Kennon, and John Van Schaik. The following table shows the impact of using the technology incentive on a sample contract repriced for us at one of the buying commands we visited. The actual profit objective calculated prior to the profit policy change was based on technical performance risk valued at the top of the standard range. The repricing to reflect what would likely have occurred after the profit policy change was based on technical performance risk valued at the top of the technology incentive range. No other changes were made in the profit objective calculation.
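The arithmetic behind such a repricing can be sketched simply: the profit objective is the cost base multiplied by the sum of the profit factor rates, so moving the technical performance risk factor from the top of the standard range to the top of the technology incentive range raises the objective by the rate difference times the cost base. The sketch below is illustrative only; the cost base and all rates are hypothetical, not the actual weighted-guidelines values or the figures from the repriced contract.

```python
# Illustrative sketch of repricing a profit objective under the new policy.
# All dollar amounts and rates are hypothetical, not DOD's actual figures.

def profit_objective(cost_base, technical_risk_rate, other_factors_rate):
    """Profit objective = cost base x (technical risk rate + other factor rates)."""
    return cost_base * (technical_risk_rate + other_factors_rate)

cost_base = 10_000_000       # hypothetical negotiated cost base, in dollars
other_factors = 0.04         # hypothetical combined rate for all other profit factors

standard_top = 0.05          # hypothetical top of the standard risk range
incentive_top = 0.07         # hypothetical top of the technology incentive range

# Before the policy change: technical risk valued at the top of the standard range.
before = profit_objective(cost_base, standard_top, other_factors)
# After the change: the same contract with risk valued at the top of the incentive range.
after = profit_objective(cost_base, incentive_top, other_factors)

print(f"Standard range objective:  ${before:,.0f}")
print(f"Incentive range objective: ${after:,.0f}")
print(f"Increase from incentive:   ${after - before:,.0f}")
```

Under these assumed numbers, the only change is the technical risk rate, mirroring the repricing described above in which no other element of the profit objective calculation was altered.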
In negotiating profit on contracts, the Department of Defense (DOD) requires contracting officers to set negotiating objectives by relying on guidelines in defense regulations. Congress mandated that DOD review its profit guidelines and consider whether modifying them would provide more incentive for contractors to develop and produce complex and innovative new technologies for weapon systems. After completing its review, DOD issued a final rule in December 2000 that added a technology incentive to its guidelines for setting profit objectives on negotiated defense contracts. This report reviews whether the new policy is (1) likely to achieve its intended objective of stimulating increased innovation and (2) consistent with the revised policies for acquiring weapons systems. GAO found that the new profit policy may have limited effect on incentivizing additional innovation because the policy has limited reach during research and development and it does not provide adequate guidance on when to apply the incentive. The policy may not reinforce DOD's emphasis on technology maturity in its guidance on the system acquisition process.
Public and private organizations rely on computer systems to transfer increasing amounts of money; sensitive, proprietary economic and commercial information; and classified and sensitive but unclassified defense and intelligence information. The increased transfer of critical information increases the risk that malicious individuals will attempt to disrupt or disable our nation’s critical infrastructures and obtain sensitive and critical information for malicious purposes. To address the threats to the nation’s cyber-reliant critical infrastructure, federal policy emphasizes the importance of public-private coordination. Different types of cyber threats from numerous sources may adversely affect computers, software, a network, an agency’s operations, an industry, or the Internet itself. Cyber threats can be unintentional or intentional. Unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. Intentional threats include both targeted and untargeted attacks. Attacks can come from a variety of sources, including criminal groups, hackers, and terrorists. Table 1 lists sources of threats that have been identified by the U.S. intelligence community and others. Different types of cyber threats can use various cyber exploits that may adversely affect computers, software, a network, an agency’s operations, an industry, or the Internet itself (see table 2). Groups or individuals may intentionally deploy cyber exploits targeting a specific cyber asset or attack through the Internet using a virus, worm, or malware with no specific target. Recent reports of cyber attacks illustrate that such attacks could have a debilitating impact on national and economic security and on public health and safety. In May 2007, Estonia was the reported target of a denial-of-service cyber attack with national consequences. The coordinated attack created mass outages of its government and commercial Web sites. 
In March 2008, the Department of Defense (DOD) reported that, in 2007, computer networks operated by the department, other federal agencies, and defense-related think tanks and contractors were targets of computer network intrusion. Although those responsible were not definitively identified, the attacks appeared to have originated in China. In January 2010, it was reported that at least 30 technology companies—most in Silicon Valley, California—were victims of intrusions. The cyber attackers gained unauthorized access to files that may have included the companies’ computer security systems, crucial corporate data, and software source code. In January 2010, a California-based company filed suit alleging that two Chinese companies stole software code and then distributed it to tens of millions of end users as part of Chinese government-sponsored filtering software. The company is seeking more than $2.2 billion. Academic researchers found that portions of the company’s software code had been copied and used in initial versions of the Chinese software. Based on an 8-month investigation, researchers reported that computer systems in India were attacked. The suspected cyberattackers remotely connected to Indian computers using social networks to install botnets that infiltrated and infected Indian computers with malware. The incidents were reported to have been traced back to an underground espionage organization that was able to steal sensitive national security and defense information. Federal law and policy call for critical infrastructure protection activities that are intended to enhance the cyber and physical security of both the public and private infrastructures that are essential to national security, national economic security, and national public health and safety. Federal policies address the importance of coordination between the government and the private sector to protect the nation’s computer-reliant critical infrastructure. 
These policies establish critical infrastructure sectors, assign agencies to each sector (sector lead agencies), and encourage private sector involvement. For example, the Department of the Treasury is responsible for the banking and finance sector, while the Department of Energy (DOE) is responsible for the energy sector. Table 3 lists agencies and their assigned sector. In May 1998, Presidential Decision Directive 63 (PDD-63) established critical infrastructure protection (CIP) as a national goal and presented a strategy for cooperative efforts by the government and the private sector to protect the physical and cyber-based systems essential to the minimum operations of the economy and the government. Among other things, this directive encouraged the development of ISACs to serve as mechanisms for gathering, analyzing, and disseminating information on cyber infrastructure threats and vulnerabilities to and from owners and operators of the sectors and the federal government. For example, the Financial Services, Electricity Sector, IT, and Communications ISACs represent sectors or subcomponents of sectors. However, not all sectors have ISACs. For example, according to private sector officials, the DIB sector and the subcomponents of the energy sector, besides electricity, do not have established ISACs. The Homeland Security Act of 2002 created the Department of Homeland Security (DHS). In addition, among other things, it assigned the department the following CIP responsibilities: (1) developing a comprehensive national plan for securing the key resources and critical infrastructures of the United States; (2) recommending measures to protect the key resources and critical infrastructures of the United States in coordination with other groups; and (3) disseminating, as appropriate, information to assist in the deterrence, prevention, and preemption of or response to terrorist attacks. 
In 2003, The National Strategy to Secure Cyberspace was issued, which assigned DHS multiple leadership roles and responsibilities in this CIP area. They include (1) developing a comprehensive national plan for CIP, including cybersecurity; (2) developing and enhancing national cyber analysis and warning capabilities; (3) providing and coordinating incident response and recovery planning, including conducting incident response exercises; (4) identifying, assessing, and supporting efforts to reduce cyber threats and vulnerabilities, including those associated with infrastructure control systems; and (5) strengthening international cyberspace security. PDD-63 was superseded in December 2003 when Homeland Security Presidential Directive 7 (HSPD-7) was issued. HSPD-7 defined additional responsibilities for DHS, federal agencies focused on specific critical infrastructure sectors (sector-specific agencies), and other departments and agencies. HSPD-7 instructs these sector-specific agencies to identify, prioritize, and coordinate the protection of critical infrastructure to prevent, deter, and mitigate the effects of attacks. HSPD-7 makes DHS responsible for, among other things, coordinating national CIP efforts and establishing uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors. As part of its implementation of the cyberspace strategy and other requirements to establish cyber analysis and warning capabilities for the nation, DHS established the United States Computer Emergency Readiness Team (US-CERT) to help protect the nation’s information infrastructure. US-CERT is the focal point for the government’s interaction with federal and private-sector entities 24 hours a day, 7 days a week, and provides cyber-related analysis, warning, information-sharing, major incident response, and national-level recovery efforts. 
It is charged with aggregating and disseminating cybersecurity information to improve warning of and response to incidents, increasing coordination of response information, reducing vulnerabilities, and enhancing prevention and protection. In addition, the organization is to collect incident reports from all federal agencies and assist agencies in their incident response efforts. It is also to accept incident reports when voluntarily submitted by other public and private entities and assist them in their response efforts, as requested. In addition, as part of its responsibilities, DHS first issued the NIPP in 2006 and then updated it in 2009. The NIPP is intended to provide the framework for a coordinated national approach to address the full range of physical, cyber, and human threats and vulnerabilities that pose risks to the nation’s critical infrastructure. The NIPP relies on a sector partnership model as the primary means of coordinating government and private sector CIP efforts. Under this model, each sector has both a government council and a private sector council to address sector-specific planning and coordination. The government and private sector councils are to work in tandem to create the context, framework, and support for coordination and information-sharing activities required to implement and sustain that sector’s CIP efforts. The council framework allows for the involvement of representatives from all levels of government and the private sector, so that collaboration and information-sharing can occur to assess events accurately, formulate risk assessments, and determine appropriate protective measures. The government councils are to coordinate strategies, activities, policies, and communications across government entities within each sector. Each government council is to be composed of representatives from various levels of government (i.e., federal, state, local, and tribal) as appropriate to the security needs of each individual sector. 
In addition, a representative from the sector-specific agency is to chair the council and is to provide cross-sector coordination with each of the member governments. For example, DOE in its role as the sector-specific agency for the energy sector has established and chairs a government council. The establishment of private sector councils (sector councils) is encouraged under the NIPP model, and these councils are to be the principal entities for coordinating with the government on a wide range of CIP activities and issues. Under the model, critical asset owners and operators are encouraged to be involved in the creation of sector councils that are self-organized, self-run, and self-governed, with a spokesperson designated by the sector membership. Specific membership can vary from sector to sector but should be representative of a broad base of owners, operators, associations, and other entities—both large and small—within the sector. For example, the banking and finance sector has established the Financial Services Sector Coordinating Council for Critical Infrastructure Protection and Homeland Security, which is made up of over 40 entities, including banks, insurance companies, and industry associations. Most recently, the White House issued the Cyberspace Policy Review that, among other things, recommended that the White House appoint a cybersecurity policy official for coordinating the nation’s cybersecurity policies and activities. Subsequently, in December 2009, the President appointed a Special Assistant to the President and Cybersecurity Coordinator, referred to as the Cybersecurity Coordinator in this report, to be the central coordinator of federal government cybersecurity-related activities. Using the NIPP partnership model, the private and public sectors coordinate to manage the risks related to cyber CIP. This coordination includes sharing information, conducting exercises, and providing resources. Sharing information. 
Information sharing enables both government and private sector partners to assess events accurately, formulate risk assessments, and determine appropriate courses of action. This includes sharing information on cyber threats and vulnerabilities, providing alerts or warnings about such threats, and recommending mitigation steps. Conducting exercises. Building and maintaining organizational and sector expertise requires comprehensive exercises to test the interaction between stakeholders in the context of serious cyber attacks, terrorist incidents, natural disasters, and other emergencies. Exercises are conducted by private sector owners and operators, and across all levels of government. Providing resources. Maximizing the efficient use of resources is a key part of protecting the nation’s critical infrastructure. This includes providing technical and policy expertise, training, commitment of people, and financial aid through grants. Over the last several years, we have reported and made recommendations regarding various aspects of cyber CIP, including identifying information-sharing practices and bolstering the public-private partnership. In 2001, we identified the information-sharing practices of leading organizations and the factors they deemed critical to their success in building successful information-sharing relationships. All of the organizations identified trust as the essential underlying element to successful relationships and said that trust could be built only over time and, primarily, through personal relationships. Other critical success factors identified included (1) establishing effective and appropriately secure communication mechanisms, such as regular meetings and secure Web sites; (2) obtaining the support of senior managers at member organizations regarding the sharing of potentially sensitive member information and the commitment of resources; and (3) ensuring organizational leadership continuity. 
In addition, to be successful, information-sharing organizations provided identifiable membership benefits, such as current information about threats, vulnerabilities, and incidents. Without such benefits, according to the representatives we met with, members would not continue participating. Over the last several years, we have also made about 30 recommendations in key cybersecurity areas to help bolster private-public partnerships. In 2008, we reported on US-CERT and found that it faced a number of challenges that impeded it from fully implementing a cyber analysis and warning capability and thus being able to coordinate the national efforts to prepare for, prevent, and respond to cyber threats. The challenges included creating warnings that are consistently actionable and timely and employing predictive analysis. We made 10 recommendations to DHS to improve the department’s cyber analysis and warning capabilities. These included, among others, addressing deficiencies in its monitoring efforts, including establishing a comprehensive baseline understanding of the nation’s critical information infrastructure and engaging appropriate private-sector stakeholders to support a national-level cyber monitoring capability. We also recommended that DHS address the challenges that impeded it in fully implementing cyber analysis and warning, including developing close working relationships with federal and private-sector entities to allow the free flow of information and ensuring consistent notifications that are actionable and timely. DHS agreed with most of these recommendations and initiated related actions. In 2007 and 2009, we determined the extent to which sector plans for CIP fully addressed DHS’s cyber security requirements and assessed whether these plans and related reports provided for effective implementation. 
We found, among other things, that although DHS reported many efforts under way and planned to improve the cyber content of sector-specific plans, sector-specific agencies had yet to update their respective sector-specific plans to fully address key DHS cybersecurity criteria. The lack of complete updates and progress reports was further evidence that the sector planning process had not been effective, thus leaving the nation in the position of not knowing precisely where it stands in securing cyber-critical infrastructures. Not following up to address these conditions also showed DHS was not making sector planning a priority. We recommended that DHS assess whether the existing sector-specific planning process should continue to be the nation’s approach to securing cyber and other critical infrastructure and, if so, make the process an agency priority and manage it accordingly. DHS concurred with the recommendations. In addition, due to concerns about DHS’s efforts to fully implement its CIP responsibilities, as well as known security risks to critical infrastructure systems, we added cyber CIP as part of our federal IT systems security high-risk area in 2003 and have continued to report on its status since that time. Most recently, we testified in 2009 on the results of expert panels that identified the importance of bolstering public-private partnerships. In discussions with us, the panel identified 12 key areas requiring improvement. One of the key strategies was to bolster public-private partnerships by providing adequate economic and other incentives for greater investment and partnering in cybersecurity. Private sector stakeholders reported that they expect their federal partners to provide usable, timely, and actionable cyber threat information and alerts, access to sensitive or classified information, a secure mechanism for sharing information, security clearances, and a single centralized government cybersecurity organization to coordinate federal efforts. 
Some other services were less important, such as penetration testing of networks and financial support. Table 4 summarizes the extent to which the 56 private sector survey respondents expect to receive certain services from the federal government, in order of most to least expected. The two services private sector stakeholders most expect from their federal partners are timely and actionable cyber threat and alert information—providing the right information to the right persons or groups as early as possible to give them time to take appropriate action. The percentages of private sector survey respondents reporting that they expect timely and actionable cyber threat and alert information to a great or moderate extent were 98 and 96 percent, respectively. Private sector council representatives stated that they expect their federal partners to provide timely and actionable intelligence on cyber-related issues that they can share within their membership. For example, one private sector official told us that time is of the essence when passing information to their members and that sector members expect to get a response within minutes so they can take appropriate actions as soon as possible. Private sector stakeholders also identified access to sensitive government information, a secure information-sharing mechanism, and obtaining security clearances as key expectations. The percentages of survey respondents reporting that they expect these services to a great or moderate extent were 87, 78, and 74 percent, respectively. Private sector officials stated that they need access to greater amounts of sensitive and classified government information. However, a private sector official indicated that access to classified information is not valuable because it cannot be shared. This official stated that they would prefer unclassified, actionable information that can be shared. 
A private sector council member stated that their federal partners take too long to vet sensitive cyber information before private sector partners can receive and share it. In addition, private sector officials and cyber experts stated that having a single or centralized government source for cyber-related information is important to (1) avoid confusion about who is the authoritative source, (2) have a consistent message communicated, and (3) coordinate a national response. Similarly, in March 2009, we testified that a panel of cybersecurity experts identified the creation of an accountable, operational cybersecurity organization as essential to improving our national cybersecurity posture. The experts told us that there needs to be an independent cybersecurity organization that leverages and integrates the capabilities of the private sector, civilian government, law enforcement, the military, the intelligence community, and the nation’s international allies to address incidents against the nation’s critical cyber systems and functions. Conversely, private sector survey respondents stated that they expect some services to a lesser extent from their federal partners, including policy expertise, financial support, and penetration testing of their networks. The percentages of survey respondents reporting that they expect these services to a great or moderate extent were only 29, 26, and 25 percent, respectively. In addition, government officials stated that having the government perform penetration testing could be construed as inappropriate by private entities and their customers whose information is stored on those systems. Federal partners are not consistently meeting private sector expectations, including providing timely and actionable cyber threat information and alerts, according to private sector stakeholders. 
Table 5 illustrates the degree to which the 56 private sector survey respondents reported that they are receiving services from the public sector in order of most to least expected. For example, only 27 percent of private sector survey respondents reported that they were receiving timely and actionable cyber threat information and alerts to a great or moderate extent. In addition, ISAC officials stated that the federal partners are not providing enough cyber threat information that is tailored to their sector’s needs or analytical alert information that provides the tactics and techniques being used by cyber threats. According to these ISAC officials, this more specific information is needed to understand what actions will likely protect their networks. Another private sector council official said that a lot of the information they receive does not have enough detail to be useful. Private sector stakeholders also reported a lack of access to classified information, a secure information-sharing mechanism, security clearances, and a single centralized government cyber-information source. Private sector survey respondents reported receiving access to actionable classified information, having access to a secure information sharing mechanism, and having adequate security clearances to a great or moderate extent at only 16, 21, and 33 percent, respectively. The private sector councils reported that they are not getting classified intelligence information that they perceive as being valuable to their efforts to defend their cyber resources from sophisticated attacks and that they do not have enough members with security clearances to receive classified information. Regarding the lack of a centralized source, an ISAC official stated that too many Internet-based information-sharing portals exist in the current cyber-related, public-private partnership and that the partnership could benefit from a “one-stop” portal. 
Another official suggested that one federal agency should serve as the clearinghouse for information and for assigning tasks because there are too many government agencies working independently with their own unique missions. Further, a sector council official stated that there is too much duplication of projects and that it is not uncommon to work with six different groups doing almost the same thing, groups that are not always aware of each other. Federal partners are not meeting private sector stakeholders’ expectations, in part, because of restrictions on the type of information that can be shared with the private sector. According to DHS officials, US-CERT’s ability to provide information is limited by restrictions that do not allow individualized treatment of one private sector entity over another—making it difficult to formally share specific information with entities that are being directly impacted by a cyber threat. In addition, because US-CERT serves as the nation’s cyber analysis and warning center, it must ensure that its warnings are accurate. Therefore, US-CERT’s products are subjected to a stringent review and revision process that can adversely affect the timeliness of its products—potentially adding days to the release if classified or law enforcement information must be removed from the product. In addition, federal officials are restricted to sharing classified information with only cleared private sector officials. Federal officials are also hesitant to share sensitive information with private sector stakeholders, in part, due to the fear that sensitive information shared with corporations could be shared openly on a global basis. By contrast, DOE officials stated that they are willing to share sensitive information with their energy sector member entities due to the long-standing nature of their relationships with the sector and the type of information being shared. 
In addition, according to federal officials, the limited number of private sector personnel with national security clearances makes it difficult to share classified information. Another issue having an adverse effect on the federal partners’ ability to meet private sector expectations is that federal officials do not have an adequate understanding of the private sector’s specific information requirements. Multiple private sector officials stated that federal partners could improve their methods of acquiring the type of information needed by the private sector. For example, more specific threat information would focus on the technology being used by a particular entity or specify that a threat intends to target a particular entity, rather than providing just broad threat information and alerts. Such information would also address the needs of each sector rather than giving all of the sectors the same information. A private sector official also stated that the federal government often approaches the private sector on issues that are not a priority to the private sector but are issues the federal government thinks the private sector is interested in. Further, a cyber expert suggested that the partnership could improve if the government articulated what it needs from the private sector and assisted the critical infrastructure sectors in understanding the direct benefit of their participation. DOD and DHS have started pilot programs that are intended to improve the sharing of timely, actionable, and sensitive information with their private sector partners. Specifically, DOD’s Defense Critical Infrastructure Program has a pilot program with some of its private sector DIB contractors to improve the sharing of cyber threat information, alerts, and sensitive data by establishing a new partnership model. 
This new program is known as the DIB Cyber Security/Information Assurance Program and is to facilitate the sharing of sensitive cyber information between the public and private sector. According to an agency official, this program involves a voluntary agreement between DOD and cleared DIB partners. DOD shares classified and unclassified cyber threat information and best practices. In return, the private sector partners agree to share cyber intrusion information with the DOD Cyber Crime Center, which is to serve as the focal point for information-sharing and digital forensics analysis activities related to protecting unclassified information on DIB information systems and networks. DOD’s goal is to transition from pilot to program status and expand the program to all qualified cleared contractors. In addition, the officials stated that they expect to eventually modify DOD contractual language to encourage contractors to increase cybersecurity in their networks. In addition, DHS, in conjunction with DOD and the financial services sector, has developed an information sharing pilot program which began in December 2009. To date, this program has resulted in the federal government sharing 494 of its products, including sensitive information, with the Financial Services ISAC, and the Financial Services ISAC sharing 135 of its products with the government. According to DHS officials, DHS and the Financial Services ISAC are sharing sensitive information they did not share before the agreement. Both of these pilot programs are intended to improve federal partners’ ability to share information over a secure mechanism. For example, DHS is using its US-CERT portal, and DOD is developing a DIB Net to communicate with its partners. DHS and DOE have initiatives that specifically address sharing classified information with their partners. 
DHS officials stated that DHS has a process for clearing individual sector officials at the top secret and sensitive compartmented information levels. Further, in November 2009, DHS issued the Cybersecurity Partner Local Access Plan to improve the sharing of sensitive information between the public and private sectors. According to DOE officials, DOE also has an effort under way to increase the number of private officials from the energy sector with security clearances. DHS has recently developed an integration center known as the National Cybersecurity and Communications Integration Center that is composed of the US-CERT and the National Coordinating Center for Telecommunications. This center is to provide a central place for the various federal and private-sector organizations to coordinate efforts to address cyber threats and to respond to cyber attacks. However, this center was only established in October 2009, is still in development, and does not currently have representation from all relevant federal agencies and private entities as envisioned. In addition, DHS officials stated that they have taken steps to improve US-CERT’s cyber analysis and warning capabilities in response to our previous recommendations. While the ongoing efforts may address the public sector’s ability to meet the private sector’s expectations, much work remains, and it is unclear if the efforts will focus on fulfilling the private sector’s most expected services related to information-sharing. If the government does not improve its ability to meet the private sector’s expectations, the partnerships will remain less than optimal, and the private sector stakeholders may not have the appropriate information and mechanisms needed to thwart sophisticated cyber attacks that could have catastrophic effects on our nation’s cyber-reliant critical infrastructure. 
Public sector stakeholders reported that they expect the private sector to provide a commitment to execute plans and recommendations, timely and actionable cyber threat information, and appropriate staff and resources. Four of the five government councils reported that the private sector is committed to executing plans and recommendations and providing timely and actionable threat information to a “great” or “moderate” extent. However, government council officials stated that improvements could be made to the partnership. Public sector stakeholders reported that they expect a commitment to execute plans and recommendations, timely and actionable cyber threat information, and appropriate staff and resources to be provided by private sector stakeholders. All five government councils we met with stated that they expected these services from their private sector partners to a “great” or “moderate” extent. Further, most government council representatives stated that they expect improved communications and increased trust between them and their private sector counterparts. For example, they would like the private sector to develop a strong dialogue with the government and keep the government informed about suspicious activities on private sector networks. Table 6 shows the government councils’ expected services. While many government councils reported that the private sector is mostly meeting their expectations in several areas, they also reported that improvements could be made. Four of the five government councils stated that they are receiving commitment to execute plans and recommendations and timely and actionable cyber threat information to a great or moderate extent. However, only two of the five government councils reported that the private sector is providing appropriate staff and resources. In addition, the extent to which the private sector is fulfilling the public sector’s expectations varies by sector. 
Of the five councils, the communications government council reported most positively on whether the private sector was providing expected services. Specifically, it reported that its private sector partners were providing 8 of 10 expected services to a great or moderate extent. By contrast, the IT sector council reported that the private sector was providing only 1 of 10 expected services to a great or moderate extent and 5 of 10 expected services to only some extent. Table 7 shows the extent to which the private sector is providing government councils’ expected services. Although, in general, the private sector is meeting the expectations of the federal partners, there are still improvements that can be made. For example, while the government coordinating councils reported receiving timely and actionable cyber threat and alert information from the private sector, there are limits to the depth and specificity of the information provided, according to federal officials. One issue is that private sector stakeholders do not want to share their sensitive, proprietary information with the federal government. In addition, information security companies could lose a competitive advantage by sharing information with the government, which, in turn, could share it with those companies’ competitors. In addition, according to DHS officials, despite special protections and sanitization processes, private sector stakeholders are unwilling to agree to all of the terms that the federal government or a government agency requires to share certain information. Further, in some cases, the lack of private sector commitment has had an adverse effect on the partnership. The private-public partnership remains a key part of our nation’s efforts to secure and protect its critical cyber-reliant infrastructure. For more than a decade, this private-public partnership has been evolving. 
While both private and public sector stakeholders report finding value in the partnership, the degree to which expectations are being met varies. Private sector stakeholders expect their federal partners to consistently provide usable, timely, actionable cyber threat information and alerts and, to a lesser extent, other related services. However, private sector stakeholders are not consistently receiving these expected services, in part because federal partners are restricted in the type of information that can be shared with the private sector and lack an understanding of each sector’s specific information requirements. In addition, many private sector stakeholders interact with multiple federal entities and multiple information sources, which can result in duplication of effort and inconsistent information being shared. In turn, federal partners primarily expect their private sector partners to provide a commitment to execute plans and recommendations, timely and actionable cyber threat and alert information, and appropriate staff and resources. While most federal partners stated that these expectations are largely being met, they identified difficulties with the private sector’s sharing of sensitive information and the need for private sector partners to be more willing to engage in and support partnership efforts. Federal and private sector partners have initiated efforts to improve the partnerships; however, much work remains to fully implement improved information sharing. Without improvements in meeting private and public sector expectations, the partnerships will remain less than optimal, and there is a risk that owners of critical infrastructure will not have the appropriate information and mechanisms to thwart sophisticated cyber attacks that could have catastrophic effects on our nation’s cyber-reliant critical infrastructure. 
We recommend that the Special Assistant to the President and Cybersecurity Coordinator and the Secretary of Homeland Security, in collaboration with the sector lead agencies, coordinating councils, and the owners and operators of the associated five critical infrastructure sectors, take two actions: (1) use the results of this report to focus their information-sharing efforts, including their relevant pilot projects, on the most desired services (timely and actionable threat and alert information; access to sensitive or classified information; a secure mechanism for sharing information; and security clearances) and (2) bolster efforts to build out the National Cybersecurity and Communications Integration Center as the central focal point for leveraging and integrating the capabilities of the private sector, civilian government, law enforcement, the military, and the intelligence community. We are not making new recommendations regarding cyber-related analysis and warning at this time because our previous recommendations in these areas, directed to DHS, the central focal point for such activity, have not yet been fully implemented. The national Cybersecurity Coordinator provided no comments on a draft of our report. DHS provided written comments on a draft of the report (see app. II), signed by DHS’s Director of the Departmental GAO/OIG Liaison Office. In its comments, DHS concurred with our recommendations and described steps under way to address them. Regarding our first recommendation, DHS provided an additional example of and further detail about several pilot programs it has initiated to enable the mutual sharing of cybersecurity information at various classification levels. In addition, regarding our second recommendation, DHS stated that it is integrating government components and private sector partners into its National Cybersecurity and Communications Integration Center. DHS also provided general comments. 
First, DHS noted that it is important to distinguish between actionable information and classified, contextual threat information. Specifically, DHS stated that sharing classified information with the private sector can pose a risk to national security and, consequently, such information is generally non-actionable. While we found that the private sector stakeholders we surveyed and interviewed expect such information, we do not state that the federal government should share classified information with uncleared individuals. We distinguish in this report between sharing timely and actionable threat and alert information and providing access to classified information. In addition, we discuss US-CERT’s review and revision process and identify DHS, DOD, and DOE efforts to provide clearances to private sector partners in order to share such information. Second, DHS stated that the report makes generalizations about private sector stakeholders that could be seen to suggest that such views were held across the entire cross-sector community. We acknowledge that our findings cannot be generalized across the sectors and clearly articulate that the scope of our review is limited to representatives from five critical infrastructure sectors. Third, DHS stated that the report focuses on surveyed participants’ “expectations,” while the survey itself focused on “needs.” DHS further stated that these two terms are not interchangeable for the concept of information sharing. During our review, we held numerous structured interviews with private and government stakeholders, surveyed private sector stakeholders, and asked separate questions about their expectations and needs. We acknowledge that the terms are not interchangeable and therefore appropriately reported on and distinguished both private and public sectors’ expectations and needs. Finally, DHS provided comments on the progress it has made in its sector planning approach and its clearance process. 
DHS and DOD also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the national Cybersecurity Coordinator, the Secretary of Homeland Security, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to determine (1) private sector stakeholders’ expectations for cyber-related, public-private partnerships and to what extent these expectations are being met and (2) public sector stakeholders’ expectations for cyber-related public-private partnerships and to what extent expectations are being met. We focused our efforts on five critical infrastructure sectors: Communications, Defense Industrial Base, Energy, Banking and Finance, and Information Technology. We selected these five sectors because of their extensive reliance on cyber- based assets to support their operations. This determination was based on our analysis and interviews with cybersecurity experts and agency officials. Our findings and conclusions are based on information gathered from the five cyber-reliant critical sectors and are not generalizable to a larger population. 
To determine private sector stakeholders’ expectations for cyber-related, public-private partnerships and to what extent these expectations are being met, we collected and analyzed various government and private sector reports and conducted structured interviews with sector coordinating council representatives from the five critical infrastructure sectors. In addition, we interviewed experts in critical infrastructure protection from academia and from information technology and security companies to gain a greater understanding of how the partnership should work. We also interviewed representatives from the Communications, Electricity Sector, Financial Services, Information Technology, and Multi-State Information Sharing and Analysis Centers to understand their information-sharing needs. Finally, we conducted a survey of private sector representatives from the five infrastructure sectors. The surveyed representatives were members of the information sharing and analysis centers, sector coordinating councils, associations within a sector, and/or owners/operators within a sector. The leadership of those organizations solicited these representatives to participate in our survey, consistent with the organizations’ responsibility to protect the identity of their members. We administered the survey through an electronic survey tool. We received 56 survey responses from across the five sectors. The survey results were used to determine the expectations of private sector stakeholders and the extent to which those expectations were being met. 
To determine public sector stakeholders’ expectations for cyber-related public-private partnerships and to what extent these expectations are being met, we collected and analyzed various government and private sector reports and conducted structured interviews with government coordinating council representatives familiar with the cyber partnership from the Banking and Finance, Communications, Defense Industrial Base, Energy, and Information Technology critical infrastructure sectors. We also met with representatives from DHS’s National Cyber Security Division and Office of Infrastructure Protection to verify and understand the public sector’s role in partnering with the private sector and encouraging the protection of the nation’s cyber critical infrastructure. We conducted this performance audit from June 2009 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Michael W. Gilmore, Assistant Director; Rebecca E. Eyler; Wilfred B. Holloway; Franklin D. Jackson; Barbarol J. James; Lee A. McCracken; Dana R. Pon; Carl M. Ramirez; Jerome T. Sandau; Adam Vodraska; and Eric D. Winter made key contributions to this report.
Pervasive and sustained computer-based attacks could have a potentially devastating impact on systems, operations, and the critical infrastructures they support. Addressing these threats depends on effective partnerships between the government and private sector owners and operators of critical infrastructure. Federal policy, including the Department of Homeland Security's (DHS) National Infrastructure Protection Plan, calls for a partnership model that includes public and private councils to coordinate policy and information sharing and analysis centers to gather and disseminate information on threats to physical and cyber-related infrastructure. GAO was asked to determine (1) private sector stakeholders' expectations for cyber-related, public-private partnerships and to what extent these expectations are being met and (2) public sector stakeholders' expectations for cyber-related, public-private partnerships and to what extent these expectations are being met. To do this, GAO conducted surveys and interviews of public and private sector officials and analyzed relevant policies and other documents. Private sector stakeholders reported that they expect their federal partners to provide usable, timely, and actionable cyber threat information and alerts; access to sensitive or classified information; a secure mechanism for sharing information; security clearances; and a single centralized government cybersecurity organization to coordinate government efforts. However, according to private sector stakeholders, federal partners are not consistently meeting these expectations. For example, less than one-third of private sector respondents reported that they were receiving actionable cyber threat information and alerts to a great or moderate extent. Federal partners are taking steps that may address the key expectations of the private sector, including developing new information-sharing arrangements. 
However, while the ongoing efforts may address the public sector's ability to meet the private sector's expectations, much work remains to fully implement improved information sharing. Public sector stakeholders reported that they expect the private sector to provide a commitment to execute plans and recommendations, timely and actionable cyber threat information and alerts, and appropriate staff and resources. Four of the five public sector councils that GAO held structured interviews with reported that their respective private sector partners are committed to executing plans and recommendations and providing timely and actionable information. However, public sector council officials stated that improvements could be made to the partnership, including improving private sector sharing of sensitive information. Some private sector stakeholders do not want to share their proprietary information with the federal government for fear of public disclosure and potential loss of market share, among other reasons. Without improvements in meeting private and public sector expectations, the partnerships will remain less than optimal, and there is a risk that owners of critical infrastructure will not have the information necessary to thwart cyber attacks that could have catastrophic effects on our nation's cyber-reliant critical infrastructure. GAO recommends that the national Cybersecurity Coordinator and DHS work with their federal and private sector partners to enhance information-sharing efforts. The national Cybersecurity Coordinator provided no comments on a draft of this report. DHS concurred with GAO's recommendations.
In the aftermath of September 11, 2001, there is heightened concern that terrorists may try to smuggle nuclear or radiological materials into the United States. These materials could be used to produce either an IND or an RDD. An IND is a crude nuclear bomb made with highly enriched uranium or plutonium. Nonproliferation experts estimate that a successful IND could have a yield in the 10 to 20 kiloton range (the equivalent of 10,000 to 20,000 tons of TNT). An IND with a 20-kiloton yield would have the same force as the bomb that destroyed Nagasaki; it could devastate the heart of a medium-sized U.S. city and result in thousands of casualties and radiation contamination over a wide area. Security experts have also raised concerns that terrorists could obtain radioactive material used in medicine, research, agriculture, and industry to construct an RDD, or dirty bomb. This radioactive material is encapsulated, or sealed in metal, such as stainless steel, titanium, or platinum, to prevent its dispersal and is commonly called a sealed radioactive source. These sealed sources are used throughout the United States and other countries in equipment designed to, among other things, diagnose and treat illnesses, preserve food, detect flaws in pipeline welds, and determine the moisture content of soil. Depending on their use, sealed sources contain different types of radioactive material, such as strontium-90, cobalt-60, cesium-137, plutonium-238, and plutonium-239. If these sealed sources fell into the hands of terrorists, they could use them to produce a simple but potentially dangerous weapon by packaging explosives, such as dynamite, with the radioactive material, which would be dispersed when the bomb went off. 
Depending on its type, amount, and form (powder or solid), the dispersed radioactive material could cause radiation sickness in people nearby and produce serious economic costs and the psychological and social disruption associated with the evacuation and subsequent cleanup of the contaminated area. While no terrorists have detonated a dirty bomb in a city, Chechen separatists placed a canister containing cesium-137 in a Moscow park in the mid-1990s. Although the device was not detonated and no radioactive material was dispersed, the incident demonstrated that terrorists have the capability and willingness to use radiological materials as weapons of terrorism. Another form of nuclear terrorism occurred with the release of radioactive materials in London. In November 2006, Alexander Litvinenko, a former officer of the Russian Federal Security Service, was poisoned with a gram of polonium-210—about the size of a grain of salt. His poisoning was detected only after he had been hospitalized for a few weeks and, because of his hair loss, was tested for radiation exposure. Following the poisoning, forensic investigators identified, with the help of the victim, 47 sites across London where he had been during the few days between his poisoning and death. Of these locations, about 20 showed signs of this radioactive material. Investigators identified over 900 people who might have been exposed to the polonium, including some who may have been exposed while aboard airplanes. After a thorough examination, a few of these individuals turned out to have significant exposure levels. The decontamination activities at these sites, including a hotel room, spanned 19 days, involved a number of methods and technologies, and cost in excess of $200,000. 
While state and local government responders would be expected to respond first to a terrorist incident within their jurisdiction, they would also expect that the federal government would be prepared to provide the necessary assistance for them to expedite the recovery from such an incident. Emergency management officials from 13 cities and the majority of their respective states indicated in our survey that they would rely on the federal government to conduct and fund all or almost all analysis and cleanup activities associated with recovering from an RDD or IND incident of the magnitude described in the National Planning Scenarios. However, when asked which federal agencies they would turn to for this assistance, city and state respondents replied inconsistently and frequently listed several federal agencies for the same activity. In our view, these responses indicate that there is confusion among city and state officials regarding federal responsibilities for these activities in the event of a terrorist incident. This confusion, if not addressed, could hamper the timely recovery from an RDD or IND incident. Emergency management officials from all the cities and most of their respective states told us they would rely on the federal government because their technical and financial resources would be overwhelmed by a large RDD incident—and certainly by an IND incident. Most of these officials believe they could adequately address a smaller RDD incident, such as one that is confined to a city block or inside a building. Despite this anticipated reliance on the federal government, we obtained mixed responses as to whether these RDD and IND recovery activities should be primarily a federal responsibility. Fewer than half of the respondents from the cities (6 of 13), but most of those from states (8 of 10) indicated that it should be primarily a federal responsibility. The others stressed the need for shared responsibilities with the federal government. 
Despite the anticipated reliance by city and state governments on the federal government for analysis and cleanup activities following an RDD or IND incident, FEMA has not developed a national disaster recovery strategy or related plans to guide involvement of federal agencies in these recovery activities, as directed by federal law and executive guidance. To date, much federal attention has been given to developing a response framework, with less attention to recovery. The new FEMA coordinator for the development of a national disaster recovery strategy told us that while the previous administration had drafted a “white paper” addressing this strategy, the new administration has decided to rethink the entire approach. She also told us that FEMA recognizes its responsibility to prepare a national disaster recovery strategy but she could not provide a time frame for its completion. However, she stated that when a recovery strategy is issued it should provide guidance to revise state, local, and other federal agency operational plans to fulfill their respective responsibilities. Moreover, the FEMA official in charge of planning told us that the agency has put on hold issuing component plans that describe how federal capabilities would be integrated to support state and local planning for response to and recovery from RDD and IND incidents. Some existing federal guidance documents addressing the assets and responsibilities of federal agencies for response and, to a lesser extent, recovery-related activities have been issued as annexes to the National Response Framework and in other documents. For example, there is a nuclear and radiological incident annex, which describes the policies, situations, concept of operations, and responsibilities of the federal departments and agencies for the immediate response and short-term recovery from incidents involving the release of radiological materials. 
There are also emergency support function annexes that provide a structure for coordinating federal interagency support in response to domestic incidents. In addition, two other sources of guidance have been issued that, according to FEMA officials, represent stop-gap measures until it can issue more integrated planning guidance. In 2008, FEMA issued updated guidance for protection and recovery following RDD and IND incidents. This guidance was to provide some direction to federal, state, and local emergency response officials in developing operational plans and response protocols for protection of emergency workers after such an incident. In regard to recovery, this document recommended a process to involve the affected public, state and local officials, and other important stakeholders in the identification of acceptable cleanup criteria, given the specifics of the incident. The other document, issued by the Homeland Security Council, pertains to responding to an IND in the first few days prior to the arrival of other necessary federal resources. This document was prepared because the prior FEMA guidance did not sufficiently prepare state and local emergency response authorities for managing the catastrophic consequences of a nuclear detonation. Moreover, DOE, EPA and DOD are developing more detailed operational guidance on their own based on the existing federal guidance. For example, DOE has supported research on operational guidelines for implementation of protective actions described in the FEMA guidance, EPA has drafted guidance for the optimization process following RDD and IND incidents, and DOD has established operational plans for consequence management following terrorist incidents, including RDD and IND attacks. 
Federal agencies and local jurisdictions have been using the available guidance as a basis for planning RDD and IND exercises to test the adequacy of their plans and skills in a real-time, realistic environment and evaluate their level of preparedness. We identified more than 70 RDD and IND response exercises planned and carried out by federal, state, and local agencies since mid-2003. However, officials with FEMA’s National Exercise Directorate told us that only three of the RDD response exercises had a recovery component. According to these officials, recovery discussions following an RDD or IND response exercise have typically not occurred because of the time needed to fully address the response objectives of the exercise, which are seen as a higher priority. The most recent response exercise, based in Albany, New York, and planned by DOE, set aside 2 days for federal, state, and local agencies to discuss operational recovery issues. One unresolved operational issue discussed during this exercise pertained to the transition of the leadership of the Federal Radiological Monitoring and Assessment Center (FRMAC) from the initial analysis of the contaminated area, led by DOE, to the later cleanup phase, led by EPA. For example, there are remaining questions regarding the level and quality of the monitoring data necessary for EPA to accept the leadership of FRMAC. While we were told that this transitional issue has been discussed in exercises dating back to the development of the Federal Radiological Emergency Response Plan in 1984, it has only recently been discussed in RDD or IND response exercises. Another unresolved operational recovery issue pertains to the distribution of responsibilities for the ownership, removal, and disposal of radioactive debris from an RDD or IND incident. Both of these operational issues are to be examined again in the first full-scale RDD recovery exercise, planned and led by EPA, scheduled to take place in April 2010. 
Although some federal agencies, such as DOE and EPA, have substantial experience using various cleanup methods and technologies to address radiation-contaminated areas, little is known about how these approaches might be applied in an RDD or IND incident. For example, DOE has invested hundreds of millions of dollars in research, development, and testing of methods and technologies for cleaning up and decommissioning contaminated structures and soils—legacies of the Cold War. In addition, since the passage of the Comprehensive Environmental Response, Compensation, and Liability Act in 1980, which established the Superfund program, EPA has undertaken significant efforts to study, develop, and use technologies that can address radioactive contamination. DOD has also played a major role in studying potential applications for innovative technologies for its Superfund sites. Little is known, however, about how available cleanup methods and technologies could be applied to RDD and IND incidents, because such an incident has never occurred in this country; research is currently underway to gain a better understanding of potential applications. According to decontamination experts at Lawrence Livermore National Laboratory, current research has focused on predicting the effects of radiation release in urban settings through simulation, small scale testing, and theory. In addition, researchers at EPA’s National Homeland Security Research Center informed us that while there are standard methods and technologies for cleaning up radiation-contaminated areas, more research is needed to develop standard national guidance for their application in urban environments. The lack of guidance for identifying cost-effective cleanup methods and technologies in the event of an RDD or IND incident might mean that the cleanup approach taken could unnecessarily increase the cost of recovery. 
According to a decontamination expert at Idaho National Laboratory, for example, experience has shown that not selecting the appropriate decontamination technologies can generate waste types that are more difficult to remove than the original material and can create more debris requiring disposal—leading to increased costs. Moreover, he told us that without guidance and discussion early in the response phase, a contractor might use an approach for no other reason than it was used before in an unrelated situation. In addition, the Lawrence Livermore National Laboratory decontamination experts told us that decontamination costs can increase dramatically depending on the selection of an initial approach and the length of time before remediation actions are taken. For example, they said that the conventional use of high pressure water hosing to decontaminate a building is effective under normal conditions but could be the wrong cleanup approach for an RDD using cesium-137 because the force of the water would actually cause this radioactive isotope to penetrate even further into porous surfaces. A senior EPA official with the Office of Radiation and Indoor Air told us that studies are currently underway to determine the efficacy of pressure washing for removing contamination from porous urban surfaces. In addition to the lack of knowledge about the application of cleanup methods and technologies for wide-area urban contamination from an RDD or IND incident, there are also limitations in federal capabilities to handle in a timely manner the magnitude of tasks and challenges that would be associated with these incidents. 
For example, we found that limitations in federal capabilities to complete some analysis and cleanup activities might slow the recovery from an RDD or IND incident, including: (1) characterizing the full extent of areas contaminated with radioactive materials; (2) completing laboratory validation of contaminated areas and levels of cleanup after applying decontamination approaches; and (3) removing and disposing of radioactive debris and waste. Respondents representing most of the cities (9 of 13) and states (7 of 10), and respondents from most FEMA regional offices (6 of 9) and almost all EPA regional offices (9 of 10) expressed concerns about the capabilities of federal agencies to provide the assistance needed to complete the necessary analysis and cleanup activities in the event of an RDD or IND incident. Respondents from nearly all the cities and states we surveyed expressed the need for a national disaster recovery strategy to address gaps and overlaps in current federal guidance. According to one city official, “recovery is what it is all about.” In developing such a recovery strategy, respondents from the cities, like those from their states, want the federal government to consult with them in the initial formulation of a recovery strategy through working and focus groups, perhaps organized on a regional basis. Respondents representing most cities (10 of 13) and states (7 of 10) also provided specifics on the type of planning guidance necessary, including integration and clarification of responsibilities among federal, state, and local governments. For example, respondents from some of the cities sought better guidance on monitoring radioactivity levels, acceptable cleanup standards, and management of radioactive waste. Most respondents from cities expressed the need for greater planning interactions with the federal government and more exercises to test recovery plans. 
One city respondent cited the need for recovery exercises on a regional basis so the cities within the region might better exchange lessons learned. Respondents from most cities (11 of 13) and their states (7 of 10) said that they planned to conduct RDD and IND recovery exercises in the future. Finally, emergency management officials representing almost all cities and states in our survey offered some opinions on the need for intelligence information on RDD and IND threats. They said that sharing information with law enforcement agencies is necessary for appropriate planning for an RDD or IND incident—which they generally consider to be a low-level threat—but only half of the respondents indicated that they were getting sufficient intelligence information. Emergency management officials from FEMA and EPA regional offices generally concurred with these observations and suggestions of the city and state respondents. While it was more limited in scope than what is usually envisioned as an RDD incident, the aftermath of the 2006 polonium poisoning incident in London had many of the characteristics of an RDD incident, including the testing of hundreds of people who may have been exposed to radiation and the cleanup of numerous radiation-contaminated areas. All this activity resulted from an amount of radioactive material the size of a grain of salt—many times smaller than the amount of radioactive material found in certain common medical devices that could be used in an RDD. Because of its experience in dealing with the cleanup from the 2006 polonium incident and other actions the United Kingdom has taken to prepare for an RDD or IND attack, we visited that country to examine its recovery preparedness programs. 
United Kingdom officials told us that the attention to recovery in their country is rooted in decades of experience with the conflict in Northern Ireland, dealing with widespread contamination from the Chernobyl nuclear power plant accident, and a national history of resilience—that is, the ability to manage and recover from hardship. We found that actions the United Kingdom reported taking to prepare for recovery from RDD and IND incidents are similar to many of the suggestions for improvement in federal preparedness that we obtained through our survey of city, state, and federal regional office emergency management officials in the United States. For example, we found that the United Kingdom reported taking the following actions: Enacted civil protection legislation in 2004, with subsequent non-statutory emergency response and recovery guidance to complement this emergency preparedness legislation. The emergency response and recovery guidance describes the generic framework for multi-agency response and recovery for all levels of government. The guidance emphasizes that response and recovery are not discrete activities and do not occur sequentially; rather, recovery should be an integral part of response from the very beginning, as actions taken at all times can influence longer-term outcomes for the affected communities. Developed on-line, updatable national recovery guidance in 2007. This guidance reinforces and updates the earlier emergency response and recovery guidance by establishing, among other things, a recovery planning process during the response phase so that the potential impacts of early advice and actions are explored and understood for the future recovery of the affected areas. Issued a national handbook for radiation incidents in 2008. This handbook provides scientific information, including checklists for planning in advance of an incident, fact sheets on decontamination approaches, and advice on how to select and combine management of these approaches. 
Conducted a full-scale RDD recovery exercise in 2008. This exercise, involving several hundred participants, provided a unique opportunity to examine and test the recovery planning process within the urgency of a compressed time frame. The lessons learned from this exercise were incorporated into the United Kingdom’s recovery strategy. Issued updated nuclear recovery plan guidance in 2009. This guidance provides direction on recovery from events involving a radiological release from a civil or defense nuclear reactor, as well as the malicious use of radiological or nuclear materials. Among other things, it requires that all high-risk cities in the United Kingdom prepare recovery plans for such incidents. In addition to these initiatives, in 2005, the United Kingdom established a special Government Decontamination Service. This organization was created out of recognition that it would not be cost-effective for each entity—national, regional, and local government—to maintain the level of expertise needed for cleaning up chemical, biological, radiological, and nuclear materials, given that such events are rare. Finally, according to United Kingdom officials, the 2006 polonium incident in London showed the value of recovery planning. In particular, through this incident United Kingdom officials gained an appreciation for the need to have an established cleanup plan, including a process for determining cleanup levels, sufficient laboratory capacity to analyze a large quantity of samples for radiation, and procedures for handling the radioactive waste. Furthermore, they found that implementing cleanup plans in the polonium poisoning incident and testing plans in the November 2008 recovery exercise have helped the United Kingdom to better prepare for a larger RDD or IND incident. Madam Chairwoman, this completes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. 
For further information about this testimony, please contact me at (202) 512-3841 or aloisee@gao.gov. Individuals who made important contributions to this testimony were Ned Woodward (Assistant Director), Nancy Crothers, James Espinoza, Tracey King, Thomas Laetz, Tim Persons, Jay Smale, and Keo Vongvanith. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A terrorist's use of a radiological dispersal device (RDD) or improvised nuclear device (IND) to release radioactive materials into the environment could have devastating consequences. The timely cleanup of contaminated areas, however, could speed the restoration of normal operations, thus reducing the adverse consequences from an incident. This testimony examines (1) the extent to which federal agencies are planning to fulfill their responsibilities to assist cities and their states in cleaning up areas contaminated with radioactive materials from RDD and IND incidents; (2) what is known about the federal government's capability to effectively clean up areas contaminated with radioactive materials from RDD and IND incidents; and (3) suggestions from government emergency management officials on ways to improve federal preparedness to provide assistance to recover from RDD and IND incidents. We also discuss recovery activities in the United Kingdom. This testimony is based on our ongoing review of recovery preparedness issues for which we examined applicable federal laws and guidance; interviewed officials from the Department of Homeland Security (DHS), Federal Emergency Management Agency (FEMA), Department of Energy (DOE), and Environmental Protection Agency (EPA); and surveyed emergency management officials from 13 large cities and their states, as well as FEMA and EPA regional office officials. DHS, through FEMA, is responsible for developing a comprehensive emergency management system to respond to and recover from natural disasters and terrorist attacks, including RDD and IND attacks. The response phase would involve evacuations and providing medical treatment to those who were injured; the recovery phase would include cleaning up the radioactive contamination from an attack in order to permit people to return to their homes and businesses. To date, much federal attention has been given to developing a response framework, with less attention to recovery. 
Our survey found that almost all cities and states would be so overwhelmed by an RDD or IND incident that they would rely on the federal government to conduct almost all analysis and cleanup activities that are essential first steps towards recovery. However, we found that the federal government has not sufficiently planned to undertake these activities. For example, FEMA has not issued a national disaster recovery strategy or plans for RDD and IND incidents as required by law. Existing federal guidance provides only limited direction for federal agencies to develop their own recovery plans and conduct exercises to test preparedness. Out of over 70 RDD and IND exercises conducted in the last 5 years, only three have included interagency recovery discussions following a response exercise. Although DOE and EPA have experience in the cleanup of small-scale radiation-contaminated areas, their lack of knowledge and capability to apply approaches to address the magnitude of an RDD or an IND incident could increase recovery costs and delay completion. According to an expert at Idaho National Laboratory, experience has shown that not selecting the appropriate decontamination technologies can generate waste types that are more difficult to remove than the original material and can create more debris requiring disposal, leading to increased costs. Limitations in laboratory capacity to rapidly test thousands of material samples during cleanup, and uncertainty regarding where to dispose of radioactive debris could also slow the recovery process. At least two-thirds of the city, state, and federal respondents expressed concern about federal capability to provide the necessary analysis and cleanup actions to promote recovery after these incidents. Nearly all survey respondents had suggestions to improve federal recovery preparedness for RDD and IND incidents. 
For example, almost all the cities and states identified the need for a national disaster recovery strategy to address gaps and overlaps in federal guidance. All but three cities wanted additional guidance, for example, on monitoring radioactivity levels, cleanup standards, and management of radioactive waste. Most cities wanted more interaction with federal agencies and joint exercising to test recovery preparedness. Finally, our review of the United Kingdom's preparedness to recover from radiological terrorism showed that the country has already taken actions similar to those suggested by our survey respondents, such as issuing national recovery guidance, conducting a full-scale recovery exercise, and publishing a national handbook for radiation incidents.
NextGen is a modernization effort begun in 2004 by FAA to transform the nation’s ground-based ATC system into a system that uses satellite-based navigation and other advanced technology. This effort is a multiyear, incremental transformation that will introduce new technologies and leverage existing technologies to affect every part of the NAS. These new technologies will use an Internet Protocol (IP) based network to communicate. See figure 1 below for a graphic illustration of the different parts of the NAS, the flow of information among them, and their transition to an IP-based network. According to FAA, the shift to NextGen technologies will require FAA to replace its proprietary, relatively isolated ATC computer systems with information systems that interoperate and share data throughout FAA’s operations and those of its aviation partners. These combined aviation operations are known as the enterprise. These new systems, which will be described in detail later in this report, will use IP-networking technologies to communicate across the enterprise. This transformation involves acquiring, certifying, and operating a vast network of navigation, communications, and surveillance systems, including information systems in the cockpits of thousands of aircraft (avionics); it will also employ digital and Internet-based computer-networking technologies, exposing the air traffic control (ATC) system to new cybersecurity risks. NextGen comprises many programs that are in various stages of acquisition and deployment in the NAS. FAA classifies six programs as its foundational NextGen programs: Surveillance and Broadcast Services Subsystem (SBSS), Collaborative Air Traffic Management (CATM), Common Support Services Weather (CSS-Wx), Data Communications (Data Comm), NAS Voice Switch (NVS), and System Wide Information Management (SWIM) (see fig. 2). 
For the six programs we examined, FAA relies on contractors to assist with or complete most of the broad information technology and risk management activities. NIST, OMB, and FISMA state that regardless of whether a security task was performed by a contractor or by a federal agency, the federal agency is ultimately responsible for ensuring system security. The AMS requires that FAA program officials monitor the contractors’ performance in implementing contractual requirements, including those related to security. The Office of the Chief Information Security Officer within the Office of Finance and Management oversees cybersecurity across the three main areas of FAA activity known as domains (i.e., NAS ATC operations, Mission Support, and Research and Development). This office provides operational security services to the Mission Support and R&D domains through efforts across FAA, as well as the Cyber Security Management Center (CSMC). The CSMC provides system monitoring and vulnerability remediation for FAA’s standard information-technology systems that support the agency. Mission-support information systems, such as email, are separate from the NAS and R&D domain systems. The Air Traffic Organization (ATO), the operational arm of FAA, implements and oversees cybersecurity measures for ATC information systems through several of its offices. The ATO’s NAS Security Risk Executive (Risk Executive) located in Technical Operations has responsibility for cybersecurity on all NAS ATC systems, including continuous monitoring, threat response coordination, and policy. According to FAA, the Risk Executive works internally with FAA’s Security and Hazardous Materials Office and NextGen offices, and externally with Department of Homeland Security (DHS) and airline stakeholders to provide an understanding of FAA’s critical mission and how it relates to other critical infrastructures. 
Another office within ATO, the NAS Cyber Operations unit, is responsible for monitoring some NAS systems, network data flows, and cyber events to detect anomalous and unauthorized cyber activities in the NAS domain. ATO’s Program Management Office is responsible for developing and fulfilling cybersecurity and all other system requirements for NAS information systems, including NextGen systems, through the acquisitions process. The Office of NextGen develops and disseminates cybersecurity policy on NextGen’s system engineering and controls, develops the NAS Enterprise Architecture, which is the agency’s long-term strategic plan for NextGen that includes, among other things, the information systems security (ISS) plans, and is responsible for the overall implementation of FAA’s NextGen initiative. The Office of Security and Hazardous Material Safety performs internal forensics investigations on computers that CSMC identifies as involved in activity that may compromise cybersecurity. The Office of Safety certifies the safety of all aircraft and aircraft equipment, including the software components for the avionics systems that could affect the safe operation of an aircraft. The Federal Information Security Management Act of 2002 (FISMA) established a comprehensive framework to better ensure the effectiveness of security controls over information resources that support federal operations and assets. FISMA requires each agency to develop, document, and implement an agency-wide information-security program, using a risk-based approach to determine and address cybersecurity requirements for information system management. Such a program includes planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies. Federal cybersecurity guidelines, such as those published by NIST, strongly encourage agencies to implement information cybersecurity early in the process of developing information systems. 
In this manner, the cybersecurity requirements can change as needed and be integrated cost-effectively. NIST also provides a process for integrating information-security and risk-management activities into the system development process over the life of the system. Accordingly, NIST has developed a risk management framework of standards and guidelines for agencies to follow in developing information security programs. Relevant publications include the following: Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach provides a process that integrates information-security and risk-management activities into the system development life cycle, including security categorization, security control selection and implementation, security control assessment, information system authorization, and security control monitoring of an information system. Security and Privacy Controls for Federal Information Systems and Organizations provides a catalog of security and privacy controls for federal information systems and organizations, and a process for selecting controls to protect organizational operations, assets, individuals, other organizations, and the nation from a diverse set of threats, including hostile cyber attacks, natural disasters, structural failures, and human errors. The guidance includes privacy controls to be used in conjunction with the specified security controls to achieve comprehensive security and privacy protection. Security Considerations in the System Development Life Cycle presents a framework for incorporating security across the life cycle of a system and describes a minimum set of security steps needed to effectively incorporate security into a system during its development. It is intended to help agencies select and acquire cost-effective security controls by explaining how to include information-system security requirements in the system development life cycle.
In addition to these NIST publications, the Office of Management and Budget’s (OMB) Security of Federal Automated Information Resources establishes a minimum set of controls to be included in federal automated information-security programs; assigns federal agency responsibilities for the security of automated information; and links the agency’s automated information-security programs and the agency’s management control systems. FAA’s Acquisition Management System (AMS) provides policies and guidance for managing all of its acquisitions. The AMS serves as the framework for IT project management and risk evaluation to help ensure that systems are developed and maintained on time and within budget, and that they deliver the capabilities necessary to meet user requirements including the development and integration of cybersecurity controls. FAA faces cybersecurity challenges in at least three areas: (1) protecting its air traffic control (ATC) information systems, (2) securing aircraft avionics used to operate and guide aircraft, and (3) clarifying cybersecurity roles and responsibilities among multiple FAA offices. FAA has taken several steps to address these challenges, but cybersecurity experts suggested additional actions FAA could take to enhance cybersecurity. New networking technologies connecting FAA’s ATC information systems expose these systems to new cybersecurity risks, potentially increasing opportunities for systems to be compromised and damaged. Such damage could stem both from attackers seeking to gain access to and move among information systems, and from trusted users of the systems, such as controllers or pilots, who might inadvertently cause harm. FAA’s ATC-related information systems are currently a mixture of old, legacy systems and new, IP-networked systems. 
FAA’s legacy systems consist mainly of decades-old, point-to-point, hardwired information systems, such as controller voice-switching systems, that share information only within their limited, wired configuration. In contrast, FAA plans for NextGen call for the new information systems to be networked together with IP technology into an overarching system of interoperating subsystems. According to FAA officials and experts we consulted, the ease of access to these different types of systems, and the potential to damage them, varies. The older systems, depicted on the left in figure 3 below, are difficult to access remotely because few of them connect from FAA to external entities, such as through the Internet. They also have limited lines of direct connection within FAA. Conversely, the new information systems for NextGen programs are designed to interoperate with other systems and use IP networking to communicate within FAA, as shown on the right in figure 3 below. According to experts, if one system connected to an IP network is compromised, damage can potentially spread to other systems on the network, continually expanding the parts of the system at risk. As shown in the figure, cybersecurity controls, if properly designed and effectively implemented, can make IP-networked systems more resilient against damage while allowing the systems to interoperate. According to MITRE, because the older systems had limited connectivity, they were generally not protected with cybersecurity controls. Once one of these systems is breached, an attacker can potentially damage that system, gain access to other systems with which it communicates, and damage those systems as well. According to FAA, so far approximately 36 percent of the ATC systems in the national airspace system (NAS) are connected using IP, and FAA officials expect the percentage of NAS systems using IP networking to grow to 50 to 60 percent by 2020.
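The spread of damage across interoperating systems that the experts describe is essentially a network-reachability problem. The short sketch below illustrates it; the topologies and system names are hypothetical, not actual FAA systems.

```python
from collections import deque

def reachable(adjacency, start):
    """Systems an attacker could potentially reach from a compromised
    starting point, following network links (breadth-first search)."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Hypothetical topologies: a legacy point-to-point pair versus a small
# set of IP-networked, interoperating subsystems.
point_to_point = {"voice_switch": ["radar_console"]}
ip_network = {
    "gateway": ["weather_feed", "flight_data"],
    "weather_feed": ["gateway", "display"],
    "flight_data": ["gateway", "display"],
    "display": ["weather_feed", "flight_data"],
}

# A breach of the isolated pair stays contained to those two systems...
assert reachable(point_to_point, "voice_switch") == {"voice_switch", "radar_console"}
# ...while a breach of one IP-networked node can reach every subsystem.
assert reachable(ip_network, "weather_feed") == {
    "gateway", "weather_feed", "flight_data", "display"}
```

The sketch also shows why the report emphasizes cybersecurity controls for IP-networked systems: segmentation and boundary controls work by removing edges from this reachability graph.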
According to MITRE and other experts, a hybrid system comprising both IP-connected and point-to-point subsystems increases the potential for the point-to-point systems to be compromised because of the increased connectivity to the system as a whole provided by the IP-connected systems. We reported in January 2015 that FAA has taken steps to protect its ATC systems from cyber-based threats. However, we stated that significant security-control weaknesses remain that threaten the agency’s ability to ensure the safe and uninterrupted operation of the national airspace system. We made numerous recommendations to address these weaknesses, and FAA has concurred with these recommendations. (GAO, Information Security: FAA Needs to Address Weaknesses in Air Traffic Control Systems, GAO-15-221, Washington, D.C.: January 2015.) FAA is developing an approach, called an enterprise approach, to connect and protect its information systems enterprise-wide. The enterprise approach views IP-networked systems as subsystems within the larger enterprise-wide system. Under this approach, the subsystems can interoperate while an enterprise-wide set of shared cybersecurity controls, called “common controls,” and a monitoring program protect and increase the resiliency of the subsystems. According to FAA officials and cybersecurity experts we spoke to, using common controls in an enterprise approach increases the efficiency of cybersecurity efforts. For example, NIST recommends the use of common controls because when new threats to the system are discovered and those threats can be addressed by revisions to common controls, agencies can then immediately protect all the interoperating subsystems by revising just the common control. For isolated, legacy systems, cybersecurity control revisions have to be developed and implemented uniquely for each individual system.
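The efficiency benefit of common controls can be illustrated with a minimal sketch: subsystems that inherit a shared control are all updated the moment that one control is revised, with no per-system change. The control names and systems below are hypothetical.

```python
# Shared "common" controls, maintained once at the enterprise level.
common_controls = {"boundary-firewall": "v1", "central-logging": "v1"}

# Hypothetical subsystems: each inherits the shared controls and also
# keeps a few system-specific ones of its own.
systems = {
    "weather_feed": {"inherits": common_controls, "specific": {"sensor-auth": "v1"}},
    "flight_data":  {"inherits": common_controls, "specific": {"msg-signing": "v1"}},
}

# Revising a single common control immediately covers every subsystem
# that inherits it -- the NIST rationale the report describes.
common_controls["boundary-firewall"] = "v2"
assert systems["weather_feed"]["inherits"]["boundary-firewall"] == "v2"
assert systems["flight_data"]["inherits"]["boundary-firewall"] == "v2"
```

By contrast, a revision to a system-specific control (like the hypothetical "msg-signing") would have to be developed and applied separately for each isolated system, which is the legacy situation the report contrasts this with.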
FAA officials said that they apply both common controls and individual system controls, where appropriate, to IP-connected systems interoperating within an enterprise domain, in accordance with NIST guidance and OMB policy. Twelve of our 15 cybersecurity experts discussed enterprise-level holistic threat modeling, and all 12 agreed that FAA should develop such a model to strengthen cybersecurity agency-wide. NIST and the 12 experts we consulted said that threat modeling, a cybersecurity best practice, enables an organization to identify known threats, including insider threats, across its organization and align its cybersecurity efforts and limited resources accordingly to protect its mission. NIST guidance also states that an integrated, agency-wide view for managing risk can address the complex relationship among missions, the business processes needed to carry out missions, and the information systems supporting those missions and processes. NIST also recommends organization-wide threat modeling because an agency-wide threat model would help to identify all known threats to information systems, allowing an agency to further identify vulnerabilities in those systems. FAA officials said that FAA has not produced a plan to develop an enterprise-wide threat model but has made some initial steps toward developing such a model. Specifically, FAA officials said that they have examined threats to the future NextGen air-transportation system and are currently working to develop multiple threat models. Such efforts include reviewing the resiliency of the ATC system in conjunction with the Department of Homeland Security (DHS). NIST recommended such a review in its guidelines to promote the protection of critical infrastructure. According to FAA, it also assesses risks associated with individual systems when it acquires them and during system reauthorization.
According to FAA, these assessments examine how the system in question interoperates with other systems; however, FAA officials agree that these assessments do not constitute a holistic threat model that might give FAA an agency-wide view of known threats to the entire ATC system. One FAA official, and a report published by an aviation advisory group, stated that such a threat model would allow FAA to approach cybersecurity in a proactive way, whereas FAA’s current activities are reactive. For example, a threat model like that recommended by NIST and our experts could help FAA be more proactive in dealing with the rise of insider threats in federal agencies. FAA officials told us that they have not yet reached a point where they are prepared to pursue a comprehensive enterprise-wide threat model. Some experts told us that developing and maintaining a threat model would be costly and time consuming. FAA officials told us that they have not determined the funding or time that would be needed to develop such a model or identified the resources needed to implement it. One senior FAA official agreed with the experts’ view that an enterprise-wide holistic threat model is expensive and time-consuming to accomplish and maintain; he said that no plan currently exists to produce one for this reason. While developing a holistic threat model could be costly and time-consuming, in a constrained-resource environment such as FAA’s, the information contained in such a model could allow FAA to target resources to parts of the system commensurate with the likelihood of compromise and the danger associated with the potential consequences that might occur. Without a holistic threat model, it is unclear how FAA will be able to develop a more comprehensive picture of threats to its systems, and how those threats might compromise these systems.
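Targeting resources commensurate with likelihood and consequence is, at its core, a risk-ranking exercise. The following sketch shows one simple way such a ranking could work; the threat entries, system names, and 1–5 scoring scales are illustrative assumptions, not FAA data.

```python
# Hypothetical threat-model entries: each known threat is scored for
# likelihood of compromise and severity of consequence (1 = low, 5 = high).
threats = [
    {"system": "flight_data",  "threat": "external intrusion", "likelihood": 2, "consequence": 5},
    {"system": "email",        "threat": "phishing",           "likelihood": 5, "consequence": 2},
    {"system": "voice_switch", "threat": "insider misuse",     "likelihood": 3, "consequence": 4},
]

def prioritize(entries):
    """Rank threats by a simple risk score (likelihood x consequence),
    highest first, so limited resources go to the largest risks."""
    return sorted(entries, key=lambda e: e["likelihood"] * e["consequence"], reverse=True)

ranked = prioritize(threats)
assert ranked[0]["threat"] == "insider misuse"  # 3 x 4 = 12 outranks both 10s
```

Real threat models weigh far more factors (attacker capability, exposure, existing controls), but even this toy ranking shows the point the report makes: without an agency-wide model, a high-likelihood but low-consequence threat can absorb resources that a higher-scoring risk deserves.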
Most of our experts said that without this knowledge, FAA might not target its cybersecurity resources and analyses appropriately, leaving some important risks unmitigated while overprotecting against less severe risks. Ten of the cybersecurity experts we contacted also said that a holistic continuous-monitoring program is necessary for the IP-networked agency-wide approach that FAA is taking to accommodate NextGen programs. Cybersecurity experts and FAA officials told us that a holistic, continuous-monitoring program includes (1) real-time monitoring of the enterprise system’s boundaries, (2) detection of would-be attackers probing for vulnerabilities, (3) real-time monitoring and investigation of internal user activity that is outside expectations, and (4) other continuous-monitoring activities such as incident detection, response, and recovery activities and mitigations. FAA officials said they have implemented some monitoring activities for ATC systems. Although no coordinated policy exists for FAA enterprise-wide continuous monitoring, the Cyber Security Steering Committee has developed a plan that will incorporate DHS’s Continuous Diagnostics and Mitigation program in the future. For example, the NAS Cyber Operations (NCO) group, which has responsibility for incident response for NAS ATC systems, daily analyzes ATC’s system activity reports, which, among other things, report on cyber attacks. Currently, 9 of 39 IP-connected ATC systems provide system activity reports for NCO’s review. NCO does not currently analyze activity reports for the other 30 systems. We have previously found that this limited monitoring ability increased the risk that a cybersecurity event affecting NAS systems could go undetected and recommended that FAA provide the NCO function with sufficient access to provide more comprehensive monitoring. FAA officials said that ATO plans to have all NAS’s IP-connected systems reporting daily to NCO within 3 years.
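The monitoring gap described above amounts to a simple coverage check: which systems are not providing the daily activity reports that would surface a cyber event. The system names below are placeholders; only the 9-of-39 proportion mirrors the report.

```python
def monitoring_gaps(all_systems, reporting_systems):
    """Return the systems whose daily activity reports are missing --
    the systems where a cyber event could currently go undetected."""
    return sorted(set(all_systems) - set(reporting_systems))

# Illustrative inventory mirroring the report's counts: 39 IP-connected
# ATC systems, of which only 9 currently submit daily activity reports.
all_systems = [f"sys-{n:02d}" for n in range(1, 40)]
reporting = all_systems[:9]

gaps = monitoring_gaps(all_systems, reporting)
assert len(gaps) == 30  # the 30 systems NCO does not currently analyze
```

A real continuous-monitoring program would of course check report freshness and content, not mere presence, but even this inventory-level view is what lets an agency quantify and close the coverage gap over time.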
According to FAA and experts we interviewed, modern communications technologies, including IP connectivity, are increasingly used in aircraft systems, creating the possibility that unauthorized individuals might access and compromise aircraft avionics systems. Aircraft information systems consist of avionics systems used for flight and in-flight entertainment (see fig. 4 below). Historically, aircraft in flight and their avionics systems used for flight guidance and control functioned as isolated and self-contained units, which protected their avionics systems from remote attack. However, according to FAA and experts we spoke to, IP networking may allow an attacker to gain remote access to avionics systems and compromise them, as shown in figure 4 (below). Firewalls protect avionics systems located in the cockpit from intrusion by cabin-system users, such as passengers who use in-flight entertainment services onboard. Four cybersecurity experts with whom we spoke discussed firewall vulnerabilities, and all four said that because firewalls are software components, they could be hacked like any other software and circumvented. The experts said that if the cabin systems connect to the cockpit avionics systems (e.g., share the same physical wiring harness or router) and use the same networking platform, in this case IP, a user could subvert the firewall and access the cockpit avionics system from the cabin. An FAA official said that additional security controls implemented onboard could strengthen the system. FAA officials and experts we interviewed said that modern aircraft are also increasingly connected to the Internet, which also uses IP-networking technology and can potentially provide an attacker with remote access to aircraft information systems. According to cybersecurity experts we interviewed, Internet connectivity in the cabin should be considered a direct link between the aircraft and the outside world, which includes potential malicious actors.
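The firewall segmentation at issue is, conceptually, a policy over (source, destination) network zones. A minimal default-deny sketch with hypothetical zone names shows the intended behavior: cabin traffic may reach the Internet but never the cockpit avionics. (The experts' point is that the software enforcing such a policy can itself be subverted, which this sketch does not model.)

```python
# Illustrative default-deny policy: only the flows listed here are permitted.
ALLOWED_FLOWS = {
    ("cabin", "internet"),       # passenger browsing and entertainment
    ("cockpit", "flight_data"),  # avionics traffic within the cockpit domain
}

def permitted(src_zone, dst_zone):
    """A minimal default-deny packet filter: traffic is allowed only if an
    explicit rule exists for the (source, destination) zone pair."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

assert permitted("cabin", "internet")
assert not permitted("cabin", "cockpit")   # cabin users must not reach avionics
assert not permitted("internet", "cockpit")
```

The default-deny design choice matters: any pair not explicitly allowed is blocked, so forgetting a rule fails closed rather than open.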
FAA officials and cybersecurity and aviation experts we spoke to said that increasingly passengers in the cabin can access the Internet via onboard wireless broadband systems. One cybersecurity expert noted that a virus or malware planted in websites visited by passengers could provide an opportunity for a malicious attacker to access the IP-connected onboard information system through their infected machines. According to five cybersecurity experts, the threat of malicious activity by trusted insiders also grows with the ease of access to avionics systems afforded by IP connectivity if proper controls, such as role-based access, are not in place. For example, the presence of personal smart phones and tablets in the cockpit increases the risk of a system’s being compromised by trusted insiders, both malicious and non-malicious, if these devices have the capability to transmit information to aircraft avionics systems. FAA’s Office of Safety (AVS) is responsible for certifying the airworthiness of new aircraft and aviation equipment, including software components for avionics systems. Although FAA’s aircraft-airworthiness certification does not currently include assurance that cybersecurity is addressed, FAA currently issues rules with limited scope, called Special Conditions, to aircraft manufacturers when aircraft employ new technologies where IP interconnectivity could present cybersecurity risks. FAA views Special Conditions as an integral part of the certification process, which gives the manufacturer approval to design and manufacture the aircraft, engine, or propeller with additional capabilities not referred to in FAA regulations. For example, FAA issued Special Conditions to address the increased connectivity among aircraft cockpit and cabin systems for the Boeing 787 and Airbus A350 to provide systems cybersecurity and computer network protection from unauthorized external and internal access. 
FAA officials said that research supporting cybersecurity-related Special Conditions could be aggregated and used to support portions of a new rule, and industry experts we spoke with said they would welcome the certainty that rulemaking would bring. According to FAA officials and the Radio Technical Commission for Aeronautics (RTCA), FAA has not yet developed new regulations to certify cybersecurity assurance for avionics systems because historically, aircraft avionics systems were isolated within the aircraft itself and not considered vulnerable to cybersecurity attacks. According to RTCA, FAA’s certification process for component airworthiness focuses on design assurance, which evaluates the probability and consequences of component failure. Further, RTCA reports that a focus on cybersecurity assurance would evaluate the likelihood and consequences of cybersecurity failure. The likelihood of an attack takes into account different levels of trustworthiness of entities with access to a component and the relative intention to do harm. However, FAA officials and an aviation expert said that intention has not been considered a factor in avionics component-system failures because other security processes generally prevented untrusted entities from gaining access to avionics components. FAA officials said that the agency recognizes that cybersecurity is an increasingly important issue in the aircraft-operating environment and is shifting the certification focus to address this potential new threat. FAA’s Office of Safety began developing a larger airworthiness rule covering avionics cybersecurity in 2013, but determined that more research was necessary before rulemaking could begin and halted the process.
In December 2014, FAA tasked its Aviation Rulemaking Advisory Committee (ARAC) with submitting a report within 14 months of the March 2015 kickoff meeting that provides recommendations on rulemaking and policy, and guidance on best practices for information security protection for aircraft, including both certification of avionics software and hardware, and continued airworthiness. FAA states that without updates to regulations, policy, and guidance to address aircraft system information security/protection (ASISP), aircraft vulnerabilities may not be identified and mitigated in a timely manner, thus increasing exposure times to security threats. According to the ARAC task assignment, the report should provide recommendations by early 2016 on whether ASISP-related rulemaking, policy, and/or guidance on leading practices are needed, and the rationale behind such recommendations. If policy or guidance, or both, are needed, among other things, the report should specify which aircraft and airworthiness standards would be affected. Cybersecurity roles and responsibilities are spread across FAA among different offices with varying missions and functions related to cybersecurity. FAA is taking steps to align agency cybersecurity orders and policies, as well as IT infrastructure and governance, with the changing needs of the NextGen cyber environment. In November 2013, the Chief Information Officer (CIO) and Chief Information Security Officer (CISO) under the FAA’s reorganized IT office began reorganizing and rewriting cybersecurity-related policies and plans agency-wide, and restructuring the agency’s IT infrastructure and governance, in part to address the shifts in cybersecurity activities and roles due to ATC modernization. According to FAA, a working group expects to complete a draft by September 2015 that reflects the restructuring of IT infrastructure. 
The FAA’s CIO is developing an enterprise approach for non-NAS information systems and cybersecurity, and is also leading a cross-agency team in developing the Cyber Security Strategy for 2016– 2020. Separately, the ATO is also developing and maintaining an enterprise approach for NAS systems in the ATC domain. FAA has also taken steps to better coordinate its cybersecurity efforts. FAA runs exercises that simulate cyber attacks and are designed to increase internal collaboration and help clarify roles during such events. Specifically, the NAS Security Risk Executive and other ATO staff organized and conducted five of these exercises between 2013 and 2015 involving FAA cybersecurity staff from different FAA offices as well as staff from the departments of Defense and Homeland Security, and MITRE. FAA officials said that these exercises are an integral part of sustaining and improving operational activities and are incorporated into the planning process for all NAS activities. FAA plans to continue conducting one or two per year. In addition to the ATO’s NAS Risk Executive, FAA established the Cybersecurity Steering Committee in November 2013 to better coordinate FAA agency-wide cybersecurity efforts at the executive level and provide an integrated approach to cybersecurity strategy and planning with a mission focus for FAA. The Committee has begun establishing the specific roles and responsibilities required to fulfill its mission. It is chaired by the CISO and includes representatives from ATO, NextGen, and Security and Hazardous Material Safety. These members are tasked with working together to identify, prioritize, strategize, and operationalize cybersecurity requirements, issues, programs, and projects needed to integrate an agency-wide approach to cybersecurity. 
Given that the Committee is in its early phases of operation, it is too early to tell whether it will be able to provide the cybersecurity visibility and coordination functions as outlined by the committee charter. While FAA is working to transform the organization of its cybersecurity efforts, the experts we consulted said that it could improve upon those efforts by including all key stakeholders in its agency-wide approach. All 15 of our cybersecurity and aviation experts agreed that organizational clarity regarding roles, responsibilities, and accountability is key to ensuring cybersecurity across the organization. In addition, the five experts who commented on stakeholder inclusion all said that because aircraft avionics systems have the potential to be connected to systems outside the aircraft, aircraft cybersecurity issues should be included in an agency-wide cybersecurity effort. For instance, AVS issues cybersecurity-related rules for aircraft and has begun reviewing rulemaking on cybersecurity, but AVS is not included in developing the agency-wide approach for information systems security and has no representative on the Cybersecurity Steering Committee. FAA states that AVS subject matter experts can be called upon to share information and recommendations but that regulatory aspects associated with cybersecurity for AVS’s information systems are addressed by FAA’s CIO and are therefore not under the purview of the FAA Cybersecurity Steering Committee. While AVS has not directly requested to be on this committee, we previously found that it is important to ensure that relevant participants be included in collaborative efforts. Excluding AVS could result in omitting an FAA stakeholder that has an understanding of specific technological changes in aircraft traversing the NAS environment and how these changes might intersect with changing ATC technologies and cybersecurity needs.
According to NIST, one goal of an agency-wide approach to cybersecurity is protecting new information systems from threats by ensuring that when those systems are acquired, they incorporate security controls. To accomplish this goal, FAA’s Acquisition Management System (AMS) includes the six major information-technology and security-risk-management activities described in key NIST guidance. While FAA has integrated these six activities into the AMS life cycle, our analysis of two NextGen foundational programs, SBSS and Data Comm, revealed instances in which some of these activities were not completed properly, or were completed in an untimely manner. In addition, while Data Comm managers have thus far provided oversight of their contractors’ security-related acquisition activities, SBSS managers did not possess some of the detailed information that would have enhanced their oversight prior to the system’s deployment. To its credit, FAA has integrated NIST’s six broad information-security and risk-management activities into its AMS, which guides the life cycle processes to be followed in developing FAA information systems. These activities include categorizing the system’s impact level, selecting security controls, implementing the security controls, assessing the security controls, authorizing the system to operate based on the results of security assessments and a determination of risk, and monitoring the efficacy of the security controls on an ongoing basis following a system’s deployment. These activities and their relationship to FAA’s AMS life cycle are shown in figure 5 below. System categorization: NIST guidance states that in applying the risk management framework to federal information systems’ design and development processes, agencies should first categorize each information system’s impact level (i.e., the severity of the consequences to the agency’s mission if a system were compromised).
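The six activities follow a fixed order through the life cycle. A simplified sketch (the activity names paraphrase NIST's, and real programs iterate through builds rather than running each step strictly once):

```python
# The six risk-management activities in AMS/NIST order (simplified).
LIFECYCLE = [
    "categorize system impact level",
    "select security controls",
    "implement security controls",
    "assess security controls",
    "authorize system to operate",
    "monitor controls on an ongoing basis",
]

def next_activity(completed):
    """Given the activities finished so far, return the next one in the
    life cycle, or None once monitoring (the final, ongoing step) begins."""
    idx = len(completed)
    return LIFECYCLE[idx] if idx < len(LIFECYCLE) else None

assert next_activity([]) == "categorize system impact level"
assert next_activity(LIFECYCLE[:4]) == "authorize system to operate"
```

The ordering is the substantive point: authorization depends on assessment results, and assessment depends on controls having been selected and implemented, which is why the report examines whether each step was completed properly and on time.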
In accordance with this guidance and other federal agency requirements, FAA’s acquisition process requires that each new system’s security impact level be categorized as low, moderate, or high based upon the risks associated with the system and the information it processes, stores, or transmits. Of the six foundational NextGen systems we reviewed, all have completed at least an initial categorization process. Select security controls: NIST guidance states that agencies should next select protective measures, known as security controls, based on the categorization described above. According to NIST guidance and federal agency requirements, the impact categorization determines which security control baseline (i.e., starting point for consideration) the system should use, as the low-impact baseline lists fewer controls than the moderate- or high-impact baselines. NIST guidance also states that as part of the selection phase, organizations should tailor the baseline security controls so that they align with the system’s specific mission, function, or environment. In some cases, this tailoring may include eliminating some inapplicable controls or applying supplemental controls. In accordance with NIST guidance, FAA’s acquisition policies require the selection of appropriate security controls that reflect the system’s categorization, and allow for appropriate tailoring of security controls. For example, detailed tailoring directions are provided in an FAA handbook that supplements the AMS. In addition, FAA recently drafted guidance to require that programs report, among other things, the cybersecurity decisions and activities conducted in the selection of security controls. Implement security controls: NIST guidance also states that once selected, the system must implement controls specified in the security plan. The guidance emphasizes that implementation helps protect systems against possible compromise.
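Categorization and baseline selection can be sketched concretely. The "high water mark" rule below follows the FIPS 199 approach of taking the highest rating across the three security objectives; the baselines shown are tiny illustrative stand-ins for the hundreds of controls in the actual NIST catalogs.

```python
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(confidentiality, integrity, availability):
    """FIPS 199-style 'high water mark': the system's overall impact level
    is the highest of its three security-objective ratings."""
    ratings = (confidentiality, integrity, availability)
    return max(ratings, key=LEVELS.__getitem__)

def select_baseline(impact, baselines):
    """Pick the starting control set for the impact level; tailoring
    (dropping inapplicable controls, adding supplements) happens afterward."""
    return set(baselines[impact])

# Illustrative baselines only; real SP 800-53 baselines are far larger,
# with the low baseline listing fewer controls than moderate or high.
baselines = {
    "low":      ["AC-1", "IA-2"],
    "moderate": ["AC-1", "AC-2", "IA-2", "IR-4"],
    "high":     ["AC-1", "AC-2", "AC-6", "IA-2", "IR-4", "SC-7"],
}

impact = categorize("low", "moderate", "low")
assert impact == "moderate"                      # one moderate rating lifts the whole system
assert "IR-4" in select_baseline(impact, baselines)
```

This is why categorization comes first in the life cycle: the impact level directly determines how many controls the system starts from before any tailoring occurs.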
When NIST changes its guidance and introduces new security controls, OMB calls for deployed systems to implement the controls within one year of the change, and for systems under development to comply with NIST publications upon their eventual deployment. The handbook that supplements FAA’s AMS states that selected and tailored security controls should be implemented; however, according to FAA officials, FAA does not have a policy regarding how quickly to implement new NIST controls, and one official stated that OMB’s direction is “not realistic” given current constraints. The official noted that while the agency recognizes that its implementation cycle for critical cybersecurity controls needs to be more agile and responsive, swift implementation is hampered by federal-funding processes, acquisition requirements, and, as discussed below, the need to extensively test security controls. The official noted that FAA is considering adapting acquisition practices in order to rapidly implement critical controls; however, no definitive plan has been established. Assess security controls: Additionally, NIST guidance states that assessments are important to ensuring that the security controls are functioning as intended. If a weakness is discovered during the assessment process, agencies are expected to generate a remediation plan to address the identified weakness. OMB directs agencies to develop plans of action and milestones (POA&Ms), which are intended to help agencies act upon assessment findings. Similarly, FAA’s acquisition policies state that security controls should be assessed to ensure that they provide the necessary security protection for each acquired system. The FAA handbook that supplements the AMS provides detailed guidance on managing POA&Ms in the event that the assessments discover weaknesses.
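A POA&M is, in essence, a tracked list of assessment weaknesses with remediation milestones. A minimal sketch, with hypothetical findings and dates:

```python
from datetime import date

def overdue_poams(poams, today):
    """Flag remediation items (POA&Ms) whose milestone date has passed
    without the weakness being closed."""
    return [p["weakness"] for p in poams
            if not p["closed"] and p["milestone"] < today]

# Hypothetical assessment findings; the weaknesses and dates are illustrative.
poams = [
    {"weakness": "incident-response procedure undocumented",
     "milestone": date(2015, 3, 1), "closed": False},
    {"weakness": "audit logging disabled on test server",
     "milestone": date(2015, 9, 1), "closed": False},
]

assert overdue_poams(poams, date(2015, 6, 1)) == [
    "incident-response procedure undocumented"]
```

The value of the structure is exactly what OMB intends: an open, dated item cannot quietly disappear, and overdue items surface automatically for management attention.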
Authorize system to operate based on risk: In addition, FAA’s AMS states that systems must obtain security authorization approval prior to receiving authorization to operate, which reflects NIST guidance that authorization to deploy a system should only be granted after considering the risks. NIST guidance states that authorizing officials should consider the results of assessments, including POA&Ms, in their decisions. Similarly, the FAA acquisition process requires that the authorizing official receive POA&Ms to assist in deciding whether the system can be deployed. Moreover, the AMS requires that systems be reauthorized at least every 3 years, and the decision regarding whether or not the security risks are acceptable must be reconsidered at that time. According to both NIST and FAA policy, reauthorization may take place more frequently than every 3 years if significant changes occur to the information system environment. Monitor security controls on ongoing basis: Last, NIST guidance states that agencies should monitor the security controls on an ongoing basis after deployment, including assessing controls’ effectiveness and reporting on the security state of the system. FAA’s AMS states that the security controls must be monitored after the system is deployed to ensure that they operate as expected and provide the necessary protection. Examples of FAA’s continuous monitoring activities include periodic scans of operational systems, patching vulnerabilities, and updating the system’s security plan. FAA’s acquisition policies also require that each system assess a subset of its controls every year. Core security controls, which have greater volatility or importance to the organization, are to be assessed every year. NextGen programs are completed in stages, sometimes referred to as builds. As a result, even when one build of a system is operational, the program is not necessarily complete.
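The 3-year reauthorization rule, including the significant-change trigger, can be captured in a few lines (the dates are illustrative):

```python
from datetime import date

def reauthorization_due(last_authorized, today, significant_change=False):
    """A system must be reauthorized at least every 3 years, or sooner
    if a significant change occurs to its environment."""
    three_years_on = last_authorized.replace(year=last_authorized.year + 3)
    return significant_change or today >= three_years_on

assert not reauthorization_due(date(2012, 6, 1), date(2014, 6, 1))   # within the window
assert reauthorization_due(date(2012, 6, 1), date(2015, 6, 1))       # 3 years elapsed
assert reauthorization_due(date(2014, 6, 1), date(2015, 1, 1),
                           significant_change=True)                  # change forces early review
```

The significant-change clause is the operative design point: the 3-year clock is a ceiling, not a schedule, and an environment change restarts the risk-acceptance decision immediately.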
New features and capabilities can be added to the system over time as future builds move through the acquisition process. Moreover, some steps can be completed more than once and repeated to determine whether updates are required. For example, security controls for SBSS have been re-assessed by FAA’s independent risk assessment team, which conducts testing, demonstrations, file reviews, and interviews with relevant personnel before and after a system becomes operational. Many of the six broad risk-management activities described in the AMS and NIST guidance involve security controls. These detailed protective measures—which include topics like access control, contingency planning, and physical security measures—are critical to ensuring that systems are sufficiently protected. Among other things, NIST guidance states that agencies should select security controls and assess the efficacy of security controls. In addition, NIST and OMB expect agencies to address weaknesses found during assessments. We analyzed two NextGen programs’ treatment of security controls and remediation activities: (1) SBSS, which is operating in some parts of the NAS, including over the Gulf of Mexico, and (2) Data Comm, which has not yet finished the acquisition process but has deployed a test system. We selected these programs because of their importance to NextGen, their cost, and their deployment status. Although FAA adhered to aspects of federal guidance on control selection, assessment, and weakness remediation, its implementation of these risk-management activities could be or could have been improved. NIST provides the specific security protections, known as security controls, that an organization should consider to help protect an information system. For a “moderate impact” system, like the majority of the foundational NextGen systems that have completed the categorization process, NIST lists more than 200 such controls as a baseline.
However, NIST acknowledges that agencies should tailor security controls so that they are relevant and appropriate for their individual systems. The process of tailoring controls can include electing to rely on common controls rather than selecting a comparable NIST control for implementation, or deciding that controls identified by NIST are not applicable for a particular system. According to NIST guidance, these decisions must be justified and appropriately documented, such as in a system security plan. When SBSS was developed, FAA and its contractors selected controls from NIST guidance. For example, they selected controls such as an audit record of login attempts and automated mechanisms to alert security personnel to malicious activity. As allowed by NIST and FAA guidance, the SBSS program determined that many controls were not applicable or were already covered by existing common controls, such as policies and procedures related to FAA security management activities. SBSS’s initial system-security plan accounted for the majority of moderate baseline controls recommended at the time. However, it did not sufficiently document the implementation details for some controls, including contingency planning and incident response controls. For example, the initial system security plan described the existing process that FAA used to detect and respond to incidents affecting NAS systems. However, it did not describe system-level requirements or procedures for incident handling for SBSS. A few of these controls were associated with weaknesses identified during the assessment process, indicating that these controls should have received more consideration during the selection process. Better documentation in the system security plan may have supported such consideration.
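The tailoring process just described can be sketched in code: controls from a baseline are either implemented at the system level, delegated to agency-wide common controls, or declared not applicable, and each delegation or exclusion must carry a written justification suitable for the system security plan. This is an illustrative model only; the function, control IDs, and data shapes are assumptions for the sketch, not FAA tooling or the NIST catalog itself.

```python
def tailor_baseline(baseline, common_controls, not_applicable):
    """Partition a security-control baseline per tailoring decisions.

    baseline: list of control IDs (e.g., from a moderate-impact baseline).
    common_controls / not_applicable: dicts mapping control ID to the
    written justification required by NIST guidance.
    Returns the tailored plan plus any decisions lacking a justification.
    """
    plan = {"selected": [], "common": {}, "not_applicable": {}}
    for ctrl in baseline:
        if ctrl in common_controls:
            plan["common"][ctrl] = common_controls[ctrl]
        elif ctrl in not_applicable:
            plan["not_applicable"][ctrl] = not_applicable[ctrl]
        else:
            plan["selected"].append(ctrl)  # implement at the system level
    # Flag undocumented tailoring decisions instead of silently accepting
    # them; missing justifications are what weakened the SBSS plan.
    undocumented = [c for c, just in {**common_controls, **not_applicable}.items()
                    if not just.strip()]
    return plan, undocumented
```

For example, a hypothetical baseline of four controls in which one is covered by an agency-level common control and one is excluded without justification would yield two system-level controls and one flagged, undocumented decision.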
While FAA’s system-security plan template from fiscal year 2009 provided guidance on documenting security controls, the fiscal year 2015 system-security plan template has since been updated based on NIST guidance and provides substantially more detailed instruction than in the past. In addition, the 2008 SBSS system security plan did not record decisions associated with more than three dozen enhancements that NIST provides to strengthen the controls and that are included in the security baseline. For example, while the system security plan accounted for permitted actions without identification or authentication, it did not document the enhancement that clarified that actions should be permitted to the extent necessary to accomplish mission objectives. This lack of documentation may have been due to limitations in FAA’s system-security plan template during that time period. While the template provided instruction that enhancements were to be documented, it did not specifically identify them in the same way that other controls were identified. SBSS continues to update the system security plan and security controls as part of the ongoing monitoring process, and the current system-security plan template covers enhancements. The Data Comm program is newer than SBSS and is not yet operational, and as such, its initial security control selection is still under way. As of October 2014, Data Comm had included approximately 60 percent of the more than 250 controls listed in the third version of the NIST 800-53 guidelines, some of which were identified as common controls. As for the slightly more than 100 controls that were identified as out of scope at this time, an FAA official explained that updates will be made as the program matures and that more security controls may be added in the future as deemed necessary.
In accordance with NIST guidance, Data Comm has documented its justification for its current selection of NIST controls and its tailoring decisions to date in the system security plan. However, even though SBSS and Data Comm contractors justify control selection in the programs’ respective system-security plans, the contractors are not required to implement the most recent controls unless specifically tasked to do so by FAA. Currently, the SBSS contractor is only obligated to follow the first revision of NIST guidelines from 2006, although NIST has updated the guidelines three times since that time, most recently in 2013. Data Comm’s contractor is required to follow the third version of the guidelines, which was published in 2009, and updated in 2010. NIST updates its guidelines to reflect new and emerging threats, and issues new security controls to help agencies better protect their systems. According to NIST, the most recent update was motivated by the increasing sophistication of cyber attacks and the operations tempo of adversaries (i.e., the frequency of such attacks, the technical competence of the attackers, and the persistence of targeting by attackers). According to FAA, systems can incorporate new controls on an ad-hoc basis or by modifying systems’ contracts to reflect updated NIST guidance, and NIST’s most recent controls are reflected in FAA’s updated templates and guidance. However, FAA does not require that contracts be modified within a particular time frame to reflect NIST revisions. Although the SBSS program asks the contractor to implement more recent NIST controls on an ad-hoc basis, these actions are outside of the contract’s requirements and, according to program officials, must be paid for separately. 
While ad-hoc additions may be sufficient in some cases, SBSS has not yet implemented some of the controls that NIST recommended in its 2010 revision, but plans to address these controls in accordance with NIST’s 2013 update because they are part of a larger planned update. SBSS officials explained that they did not previously have funding for an update of such a large scope, but they requested and received funding beginning in fiscal year 2015. According to program officials, these funds will allow them to adopt the missing controls. An FAA official stated that the SBSS program plans to adopt the most recent version of the NIST standards in fiscal year 2016. Given the pace of change in the threat environment, OMB has directed agencies to adopt new NIST guidance within one year because timely adoption is critical to enhancing the protection of agencies’ information systems. As previously discussed, OMB requires that if NIST updates its security control guidance—which has occurred four times since the guidance was initially developed in 2005—deployed systems must implement all relevant updates within one year. Systems with weaknesses that could be exploited by these adversaries may be at increased risk if relevant controls in the new NIST guidelines are not implemented. With regard to Data Comm, an FAA official responsible for the program explained that the program security office had reviewed the changes in the most recent version of the NIST guidelines and that the official did not believe there were any security control changes that warranted a contract modification. Rather, the program will identify any security differences between its baseline, which is based on NIST 800-53 revision 3, and the latest NIST revision as part of the acquisition process and address them in the resulting POA&Ms, if required. However, the program office did not have an official analysis associated with this decision.
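OMB's one-year adoption expectation discussed above amounts to a simple compliance check: for a deployed system, any NIST 800-53 revision newer than the one it implements whose issuance date is more than a year old is overdue. The sketch below is illustrative only; the publication dates are approximations drawn from the report's narrative (revision 1 in 2006, revision 3 in 2009, revision 4 in 2013), not authoritative, and the function is not an FAA or OMB tool.

```python
from datetime import date, timedelta

# Approximate issuance dates, per the report's narrative (assumption).
SP_800_53_REVISIONS = {
    1: date(2006, 12, 1),
    3: date(2009, 8, 1),
    4: date(2013, 4, 1),
}

def overdue_revisions(implemented_rev, today, grace=timedelta(days=365)):
    """Return NIST 800-53 revisions newer than the one a deployed system
    implements whose one-year adoption window has already elapsed."""
    return [rev for rev, published in SP_800_53_REVISIONS.items()
            if rev > implemented_rev and today > published + grace]
```

Under these assumed dates, a system still on revision 1 in early 2015 (as SBSS's contract effectively was) would show revisions 3 and 4 overdue, while a system that had adopted revision 4 would show none.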
NIST guidance recommends that agencies document the assumptions, constraints, and rationale supporting significant risk-management decisions in order to inform future decisions. Without documentation of its analysis, Data Comm’s future managers may not be able to react appropriately when the threat landscape changes to such a degree that a contract modification would be warranted. FAA did not sufficiently test certain security controls provided by the system’s contractor prior to SBSS’s deployment. As previously discussed, NIST guidance permits agencies to rely on controls provided by another party, such as a contractor; however, it instructs agencies to ensure that such controls are still sufficient and appropriate. As NIST explains and as GAO has previously found, the responsibility for mitigating risks arising from the use of contractor-provided systems and security controls lies with the agency. NIST instructs agencies to determine if security controls provided by external parties are sufficient to ensure protection. While NIST guidance provides some latitude in how agencies are to accomplish this task, the guidance makes clear that the steps must be sufficient to ensure the security of the system at hand. However, FAA’s pre-deployment testing of SBSS was insufficient. Specifically, according to the SBSS contractor, FAA used a briefing by the contractor to determine that the contractor’s processes for managing and controlling changes to SBSS were sufficient. However, the agency did not evaluate the processes to ensure that they were in place and operating effectively until October 2009, nearly a year after the system was initially deployed, when FAA identified significant weaknesses with the SBSS configuration controls implemented by the contractor. Shortcomings in these contractor-provided change-management security controls contributed to a significant SBSS system outage.
Specifically, in August 2010, an engineer made an error while implementing a system change that caused the network to shut down, which prevented surveillance data transmitted through the hub from reaching FAA control centers. As a result, air traffic controllers could not use SBSS surveillance data to help separate aircraft in the affected locations for nearly 16 hours. A report produced by the SBSS contractor after the outage identified that the outage had occurred because of shortcomings in the processes and controls for managing and controlling changes to the system, and recommended steps to ensure that such a disruption would not occur again, including strengthening these controls. Although FAA’s testing had discovered weaknesses in a few of the controls less than a year before the outage, more robust testing of the controls prior to deployment may have identified these issues earlier—possibly allowing any identified weaknesses to be corrected in time to prevent or reduce the impact of the outage. However, these weaknesses had not yet been remedied when the outage occurred. FAA officials stated the outage has been thoroughly investigated to ensure that the SBSS program and the contractor learned from the experience, and that remedial actions were taken to strengthen the controls. Furthermore, a representative from the SBSS contractor noted that NextGen programs share information on an ad hoc basis to allow other systems to benefit from their experiences. Although Data Comm has not finished selecting its security controls, an FAA official who manages the program reported that the contractor is testing controls that have been selected thus far. In addition, Data Comm had identified more than 70 controls as of October 2014 that it classified as common controls. As previously discussed, common controls are managed by the agency, and accepting these controls is permitted by NIST guidance.
According to FAA, Data Comm and other NextGen systems rely on the integrity of common controls so that they do not have to duplicate effort and spend funds needlessly. However, we recently reported that FAA did not test how some common controls protected the security of systems being added to the ATC environment (GAO, Information Security: FAA Needs to Address Weaknesses in Air Traffic Control Systems, GAO-15-221 (Washington, D.C.: January 2015)). For example, FAA defined the security awareness training common control, but the testers did not examine training records to verify that personnel on the systems that rely on the control were taking the training. We recommended that FAA ensure that testing of security controls, including common controls, is comprehensive enough to determine whether these controls are operating effectively, and FAA concurred. According to NIST guidance and the AMS, agencies are expected to create plans of action and milestones (POA&M) when security weaknesses are detected during the testing of an information system. According to NIST and OMB, POA&Ms are a remediation plan with milestone dates for corrective actions that are needed to mitigate the identified weakness. In order for a POA&M to be closed, risk must be at an acceptable level. For example, the program might implement additional security controls, or further examination may show that the weakness is an acceptable risk or not actually applicable to the system. However, SBSS has not always remediated weaknesses identified in POA&Ms, which exposes the system to risk. According to FAA, SBSS was deployed in 2008 with weaknesses in the program’s intrusion detection system, a shortcoming that was still unresolved as of early 2015. An FAA official explained that remedial actions had not been implemented previously due to a lack of funding, but would be applied as part of an estimated $42 million update in fiscal year 2015. In addition, FAA’s oversight reviews of POA&Ms had been too infrequent to facilitate timely oversight and resolution of POA&Ms.
In October 2013, FAA implemented a new POA&M database, known as the SMART Tool, that FAA officials say is intended to improve oversight, and could reduce delays in addressing POA&Ms. Although FAA policy does not identify a maximum amount of time that a POA&M can remain unresolved, delays in addressing security weaknesses extend the amount of time that systems are vulnerable to exploitation. FAA officials agreed that systems are more secure when POA&Ms are resolved in a timely fashion. In addition, until September 2014, Data Comm had not finished formally documenting the rationale as to why it did not plan to mitigate some of the weaknesses of a test system associated with the program. These weaknesses had been discovered in fiscal year 2013. Specifically, the Data Comm program is using a test system at two locations to obtain feedback from controllers, pilots, and other users. The test system generated 30 POA&Ms in fiscal year 2013, and FAA has since resolved some of them. However, FAA officials reported that they do not intend to address all of the POA&Ms because they will replace the test system in 2016 with new technology that reflects user feedback. All of the POA&Ms are categorized as “low risk,” and FAA officials explained that their analysis of the costs, risks, and benefits indicates that these POA&Ms are not worth addressing given the replacement schedule; however, this analysis was not initially noted in the POA&M records. As noted previously, NIST guidance states that documenting significant risk management decisions is imperative in order for officials to have the necessary information to make credible, risk-based decisions. We asked Data Comm officials about this concern in September 2014, and were told that Data Comm had taken action to remedy the situation during the course of our audit. Specifically, the POA&M records were updated to reflect the program’s decision process.
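The POA&M life cycle described above can be sketched as a small record type: a weakness generates an open plan with milestones, and the plan may be closed only when risk reaches an acceptable level (mitigated, formally accepted, or found not applicable), with the decision documented so future managers can rely on it. This is a minimal illustrative model; the class, field names, and closure reasons are assumptions for the sketch and do not reflect FAA's SMART Tool schema.

```python
from dataclasses import dataclass, field

@dataclass
class POAM:
    weakness: str
    risk_level: str                                  # e.g., "low", "moderate", "high"
    milestones: list = field(default_factory=list)   # (due date, corrective action)
    rationale: str = ""                              # documented risk-based decision
    status: str = "open"

    def close(self, reason, rationale):
        """Close the POA&M only for an acceptable-risk outcome, and only
        with a written rationale, mirroring the documentation expectation
        in NIST guidance."""
        if reason not in {"mitigated", "risk_accepted", "not_applicable"}:
            raise ValueError("unrecognized closure reason")
        if not rationale.strip():
            raise ValueError("closure requires a documented rationale")
        self.rationale = rationale
        self.status = "closed:" + reason
```

In this model, the Data Comm situation (closing low-risk findings because the test system will be replaced) would be recorded as a `risk_accepted` closure, and the missing cost-risk-benefit rationale that GAO flagged would be rejected rather than silently omitted.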
According to FAA’s AMS, procurement should be an integrated part of the acquisition life-cycle management process, and contract administration should include monitoring contract deliverables. We found that FAA and the SBSS contractor communicated about key milestones during the acquisition process, and such communication contributes to the broad goal of contract monitoring. For example, the contractor performed the design-phase risk assessment (which detailed the methodology for control selection), presented that assessment to FAA, and received comments from FAA on the control selection process. However, FAA’s ability to monitor SBSS’s contract deliverables was reduced by limitations in the system’s work breakdown structure (WBS). A WBS deconstructs the program’s end product into successive levels with smaller elements until the work has been subdivided into a level suitable for management control. The lowest, most detailed level of the WBS is defined as the work package level. There were more than 50 work packages for SBSS, but our analysis found that they primarily covered management issues for certification and accreditation rather than detailed security issues. Consequently, most of the work packages did not address design and development activities for specific, complex, technical-security requirement areas. Moreover, many of the work packages’ project implementation activities were not formally tracked or monitored. As a result of these issues, FAA could not effectively monitor the contractor’s cost, schedule, and technical problems associated with specific security requirements. The lack of specificity and oversight unnecessarily increased the risk that weaknesses could occur.
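The coverage gap described above can be illustrated with a simple check: given the requirement areas each work package addresses, find the security requirement areas no package covers, since uncovered areas escape cost, schedule, and technical monitoring. The function and the package and area names below are hypothetical examples, not drawn from the actual SBSS WBS.

```python
def uncovered_requirement_areas(work_packages, security_areas):
    """work_packages maps each work-package name to the set of security
    requirement areas it addresses; returns the areas that no package
    covers and that would therefore escape management control."""
    covered = set()
    for areas in work_packages.values():
        covered |= set(areas)
    return sorted(set(security_areas) - covered)
```

A WBS whose packages address only certification management and network build-out, for example, would leave an intrusion-detection requirement area unmonitored.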
SBSS’s contractors are also responsible for implementing security controls to address weaknesses, but we found that in at least one case, FAA did not exercise its oversight responsibility to provide the contractor with sufficiently timely feedback on the plans of action (i.e., POA&Ms) that detail which security controls should be adopted. Specifically, in 2013, the contractor provided FAA with cost and schedule assessments associated with 48 POA&Ms. However, despite the contractor’s attempts to solicit feedback, FAA did not respond to this proposal for 5 months, at which point FAA declined the proposal. Instead, FAA determined it would issue a new request for proposals based on more recent NIST guidance (rev. 4) to address these controls. As Data Comm is still under development, its security requirements and selected controls continue to evolve. Officials stated that they work closely with the contractor to ensure delivery against technical, cost, and schedule requirements. For the security controls selected thus far, FAA is able to trace each control to the associated security requirement, an ability that indicates that FAA is exercising oversight in this area. We also found that the Data Comm program monitors system development and security through a variety of meetings, such as monthly Program Management reviews, quarterly Executive Committee meetings, biweekly Program Management Working Groups, and weekly Contracts meetings. While the AMS does not delineate specific meeting frequency or agenda requirements, the regularity and content of Data Comm’s meetings align with the AMS guidance to monitor contract deliverables. Through its NextGen initiative, FAA is shifting the ATC system from a point-to-point communications system to an Internet-technology-based, interconnected system, a process of changeover that increases cybersecurity risks.
FAA is making strides to address these risks, including implementing an enterprise approach for protecting its systems from cyber attack by both internal and external threats in accordance with NIST and other cybersecurity leading practices; however, FAA has not developed a holistic threat model that would describe the landscape of security risks to FAA’s information systems. Such a model would inform the ongoing implementation of FAA’s cybersecurity efforts to protect the National Airspace System. Development of a threat model could require significant resources and time, however, and FAA would first need to assess the costs and time frames involved in such an effort. FAA has also recognized that extensive changes to its information-security procedures and some realignment of information security functions within its organization are required to implement a secure, interconnected IP-based ATC system, and has taken a number of steps in this direction. However, the experts we consulted were concerned that FAA’s plans for organizational realignment have not adequately considered the role of the Office of Safety, which is responsible for certifying the avionics systems aboard aircraft, including cybersecurity of those systems that enable communication with air traffic control and that guide aircraft. FAA’s acquisition management system is evolving to stay up-to-date on federal cybersecurity guidance as FAA designs and develops NextGen systems; and FAA has made significant strides in incorporating requirements for security controls recommended by NIST guidelines into its acquisition of these systems. While FAA generally followed many of the NIST guidelines for establishing security controls in the two key NextGen acquisitions we examined, we found instances where FAA lacked assurance that security weaknesses were properly addressed. 
For SBSS, FAA did not ensure that weaknesses identified during security reviews were adequately tracked, and in some cases weaknesses were not resolved on a timely basis. As a result, FAA lacked assurance that weaknesses that could compromise system security were addressed, exposing systems to potential compromise. FAA has taken steps to ensure future incidents do not occur, such as creating a more robust remediation system for tracking weaknesses. Also, for both systems, FAA has not yet adopted, as directed by OMB, the latest security controls recommended by NIST guidelines, which reflect updates to deal with the evolving cybersecurity threat to information systems. Although FAA anticipates that SBSS will adopt these controls in fiscal year 2016, the program has yet to provide the funding to the contractor to implement the controls. Delays in adopting the latest standards extend the amount of time that system security requirements may not adequately mitigate system exposure to the newest threats. To better ensure that cybersecurity threats to NextGen systems are addressed, the Secretary of Transportation should instruct the FAA Administrator to take the following three actions. As a first step to developing an agency-wide threat model, assess the potential cost and timetable for developing such a threat model and the resources required to maintain it. Incorporate the Office of Safety into FAA’s agency-wide approach by including it on the Cybersecurity Steering Committee. Given the challenges FAA faces in meeting OMB’s guidance to implement the latest security controls in NIST’s revised guidelines within one year of issuance, develop a plan to fund and implement the NIST revisions within OMB’s time frames. We provided a draft of this report to the Department of Transportation for review and comment. The Department provided written comments, which are reprinted in appendix II. The Department concurred with two of our three recommendations.
Specifically, FAA concurred with the recommendation that it assess the potential cost and timetable for developing an agency-wide threat model, and the recommendation that it develop a plan to fund and implement NIST revisions within OMB timeframes. With regard to the recommendation to incorporate the Office of Safety into FAA’s agency-wide approach by including it as a member on the Cybersecurity Steering Committee, the Department believes that FAA has already complied with the intent of the recommendation. According to the Department, FAA has transferred cybersecurity personnel from the Office of Safety to the Office of the Chief Information Officer, which manages cybersecurity for all aviation safety information systems. The Department also stated that FAA’s Chief Information Office works closely with the Office of Safety on certification standards for non-FAA information systems operating within the National Airspace System. We agree that these actions will help in the execution and coordination of cybersecurity activities involving the Office of Safety. However, we maintain that in addition to these actions, the Office of Safety should be a member of the Cybersecurity Steering Committee, which, as the department notes in its letter, was established to lead FAA’s efforts to develop a comprehensive cyber-risk management strategy, and to identify and correct both existing and evolving vulnerabilities in all Internet protocol-based systems. Because aircraft aviation systems are becoming increasingly connected to systems outside the aircraft, the Office of Safety, which is responsible for certifying aircraft systems, should be involved in agency-wide cybersecurity efforts, including cybersecurity planning and vulnerability identification, since such efforts may be crucial in conducting its certification activities. 
As we state in the report, not including the Office of Safety as a full member of the Committee could hinder FAA’s efforts to develop a coordinated, holistic, agency-wide approach to cybersecurity. This lack of involvement could result in omitting an FAA stakeholder that has an understanding of specific technological changes in aircraft traversing the NAS environment and how these changes might intersect with changing ATC technologies and cybersecurity needs. In its comments the Department stated that FAA is committed to strengthening its capabilities to defend against new and evolving cybersecurity threats. According to the Department, FAA is initiating a comprehensive program to improve the cybersecurity defenses of the NAS infrastructure, as well as other mission critical systems. The Department’s letter lists a number of actions FAA has taken to improve cybersecurity, many of which are described in this report. We applaud FAA’s commitment to strengthening cybersecurity in the NAS, and agree that the actions it has taken are important steps for FAA to take. We also believe that addressing our recommendations will result in valuable improvements to the information security of the NAS. We are sending copies of this report to the Department of Transportation and the appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me on (202) 512-2834 or at dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. 
The objectives of this report were to (1) identify the key challenges facing FAA as it shifts to the NextGen ATC system and how FAA is addressing those challenges and (2) assess the extent to which FAA and its contractors followed federal guidelines for incorporating cybersecurity requirements in the acquisition of NextGen programs. To ascertain challenges FAA faces with NextGen and how FAA has begun addressing these challenges, we obtained relevant security documents from FAA and detailed descriptions of FAA’s cybersecurity efforts from officials. We also selected a non-generalizable sample of 15 cybersecurity and aviation experts with varied experience—some of whom have knowledge of FAA’s internal cybersecurity activities, policies, and personnel. We then analyzed the information about FAA’s cybersecurity efforts, synthesized it, and produced a document that we provided to the experts for their review. FAA concurred that the document was accurate. We then interviewed the experts, collecting information on the cybersecurity challenges they think FAA faces and will face in the NextGen transition. Interviewees also commented, to the extent they were able, on the extent to which FAA’s cybersecurity activities and plans address the identified challenges. We analyzed and synthesized these responses, reporting on the numbers of experts who discussed particular topics as well as the numbers of experts who agreed or disagreed on particular messages. The experts from whom we obtained responses are listed in table 1. Separately, we also obtained the views of several aviation industry officials, including officials from Airlines for America, Airports Council International—North America, Air Line Pilots Association, General Aviation Manufacturers Association, Garmin, MITRE Corporation, National Air Traffic Controllers Association, and the Boeing Corporation.
We also reviewed relevant reports issued by GAO, the Inspector General of the Department of Transportation, and the National Academies. To assess the extent to which FAA and its contractors, in the acquisition of NextGen programs, have followed federal guidelines for incorporating cybersecurity controls, we compared pertinent FAA policies, procedures, and practices with selected federal information security laws and federal guidance, including standards and guidelines from the National Institute of Standards and Technology (NIST). In particular, we compared FAA’s Acquisition Management System (AMS) against NIST’s risk-management and information technology security guidelines (800-37) and its guidance on security considerations in the system development life cycle (800-64) to determine if FAA’s acquisition policy follows federal cybersecurity guidelines for the six foundational NextGen programs: Surveillance and Broadcast Services (SBSS); Collaborative Air Traffic Management (CATM); Data Communications (Data Comm); NAS Voice Switch (NVS); Common Support Service-Weather (CSS-Wx); and System Wide Information Management (SWIM). The NextGen Foundational Programs consist of different segments (also called builds), parts, and subsystems. Some security activities take place at the program level, while others apply to specific components of the program. We analyzed FAA’s program documentation of key cybersecurity activities as described by NIST and interviewed system managers to determine if FAA completed the activities or has plans to complete the activities that were started but not fully completed. In addition, we chose two key NextGen acquisitions, SBSS and Data Comm, for an in-depth review because of their importance to NextGen, cost, and deployment status. SBSS has completed the acquisition cycle, while Data Comm will allow for insight into how the process has changed and what still might be an issue for upcoming programs.
We assessed if FAA had established and implemented a disciplined life-cycle management approach integrated with information security by comparing FAA’s policies for system life-cycle management and cybersecurity to NIST guidance on security risk management and system acquisition. We also compared documentation of project activities and plans to these requirements, and interviewed officials about FAA’s policies and FAA’s information security practices. We assessed how well FAA and contractors completed key cybersecurity activities and the extent to which they complied with AMS and NIST requirements relating to cybersecurity. We also compared documentation of project activities and plans to these requirements, and interviewed agency officials about FAA’s policies and information security practices. We also reviewed pertinent sections of prior GAO reports related to cybersecurity. We performed our work at FAA headquarters in Washington, D.C.; the Air Traffic Control Systems Command Center in Warrenton, Virginia; and at an FAA contractor location in Herndon, Virginia. We determined that information provided by the federal and nonfederal entities, such as the type of information contained within FAA’s security assessments and Plans of Action and Milestones, was sufficiently reliable for the purposes of our review. To arrive at this assessment, we corroborated the information by comparing the plans with statements from relevant agency officials. We conducted this performance audit from September 2013 through March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the individual named above, Ed Laughlin, Assistant Director; Gary Austin, Assistant Director; Nick Marinos, Assistant Director; Jake Campbell; Bill Cook; Colin Fallon; Elke Kolodinski; Nick Nadarski; Josh Ormond; Krzysztof Pasternak; and Alison Snyder made key contributions to this report.
FAA is responsible for overseeing the national airspace system, which comprises ATC systems, procedures, facilities, and aircraft, and the people who operate them. FAA is implementing NextGen to move the current radar-based ATC system to one based on satellite navigation and automation. It is essential that FAA ensure that effective information-security controls are incorporated in the design of NextGen programs to protect them from threats. GAO was asked to review FAA's cybersecurity efforts. This report (1) identifies the cybersecurity challenges facing FAA as it shifts to the NextGen ATC system and how FAA has begun addressing those challenges, and (2) assesses the extent to which FAA and its contractors, in the acquisition of NextGen programs, have followed federal guidelines for incorporating cybersecurity controls. GAO reviewed FAA cybersecurity policies and procedures and federal guidelines, and interviewed FAA officials, aviation industry stakeholders, and 15 cybersecurity experts selected based on their work and on recommendations by other experts. As the agency transitions to the Next Generation Air Transportation System (NextGen), the Federal Aviation Administration (FAA) faces cybersecurity challenges in at least three areas: (1) protecting air-traffic control (ATC) information systems, (2) protecting aircraft avionics used to operate and guide aircraft, and (3) clarifying cybersecurity roles and responsibilities among multiple FAA offices. As GAO reported in January 2015, FAA has taken steps to protect its ATC systems from cyber-based threats; however, significant security-control weaknesses remain that threaten the agency's ability to ensure the safe and uninterrupted operation of the national airspace system. FAA has agreed to address these weaknesses. Nevertheless, FAA will continue to be challenged in protecting ATC systems because it has not developed a cybersecurity threat model. 
NIST guidance, as well as experts GAO consulted, recommends such modeling to identify potential threats to information systems and to serve as a basis for aligning cybersecurity efforts and limited resources. While FAA has taken some steps toward developing such a model, it has no plans to produce one and has not assessed the funding or time that would be needed to do so. Without such a model, FAA may not be allocating resources properly to guard against the most significant cybersecurity threats. Modern aircraft are increasingly connected to the Internet. This interconnectedness can potentially provide unauthorized remote access to aircraft avionics systems. As part of the aircraft certification process, FAA's Office of Safety (AVS) currently certifies new interconnected systems through rules for specific aircraft and has started reviewing rules for certifying the cybersecurity of all new aircraft systems. FAA is making strides to address the challenge of clarifying cybersecurity roles and responsibilities among multiple FAA offices, such as creating a Cyber Security Steering Committee (the Committee) to oversee information security. However, AVS is not represented on the Committee, though it can be included on an ad hoc advisory basis. Not including AVS as a full member could hinder FAA's efforts to develop a coordinated, holistic, agency-wide approach to cybersecurity. FAA's acquisition management process generally aligned with federal guidelines for incorporating requirements for cybersecurity controls in its acquisition of NextGen programs. For example, the process included the six major information-technology and risk-management activities described by NIST. However, timely implementation of some of these activities could have been improved. 
The Surveillance and Broadcast Services Subsystem (SBSS)—which enables satellite guidance of aircraft and is currently deployed in parts of the nation—has not adopted all of the April 2013 changes to NIST security controls, such as intrusion detection improvements, although Office of Management and Budget (OMB) guidance states that deployed systems must adopt such changes within one year. Systems with weaknesses that could be exploited by adversaries may be at increased risk if relevant controls are not implemented. GAO recommends that FAA (1) assess the funding and time needed to develop a cybersecurity threat model, (2) include AVS as a full member of the Committee, and (3) develop a plan to implement NIST revisions within OMB's time frames. FAA concurred with the first and third recommendations but believes that AVS is sufficiently involved in cybersecurity. GAO maintains that AVS should be a member of the Committee.
The Civil Reserve Air Fleet (CRAF) is a voluntary, contract-based agreement between DOD and U.S. commercial air carriers that augments DOD’s military airlift capability during times of war and national emergency. The program was created in 1951. The National Airlift Policy, signed by President Reagan in 1987 and still in effect, establishes policy that the military will rely on the commercial air carrier industry to provide the airlift capability required beyond that available in the military airlift fleet. The policy includes guidelines for meeting airlift requirements in both peacetime and wartime. These guidelines direct, among other things, that policies be designed to increase participation in CRAF and enhance the mobilization base of the U.S. commercial air carrier industry. In exchange for this participation, the government provides commercial carriers the opportunity to fly DOD peacetime missions moving passengers and cargo, and also sets aside business for CRAF participants in the General Services Administration City Pairs passenger program and TRANSCOM’s Worldwide Express cargo program. CRAF is divided into three progressive stages that TRANSCOM can activate during times of crisis, in part or in whole, with the approval of the Secretary of Defense. Stage I covers a minor regional contingency or other situations where AMC cannot simultaneously meet both deployment and other airlift requirements. Stage II is tailored for a major theater war or a defense airlift emergency short of a full national emergency. Stage III would be required if the military had to fight more than one major theater war at the same time or operate in a larger crisis, including a national emergency declared by the President or Congress. DOD has activated CRAF only twice in the history of the program, and a stage III activation has never occurred. 
Stage I and part of stage II were activated in support of Operations Desert Shield and Desert Storm in August 1990 and January 1991, respectively, through May 1991. The CRAF stage I passenger segment was activated in support of Operation Iraqi Freedom from February through June 2003. To enter the CRAF program, an air carrier must (1) be a U.S.-flagged, Federal Aviation Administration-approved Part 121 air carrier, (2) be approved by the Commercial Airlift Review Board, (3) have one year of prior equivalent uninterrupted service to the commercial sector, (4) meet a minimum fleet participation level (for international carriers), (5) meet a specified utilization rate (for international and aeromedical evacuation fleet participants), and (6) be able to meet manning and crew requirements. Once approved to participate, carriers commit the number of aircraft they will make available for each of the three stages of the CRAF program. AMC then decides the number of aircraft that will be accepted into the CRAF program, based on DOD’s wartime requirements. As of April 2013, a total of 64 aircraft were committed to stage I, 308 to stage II, and 554 to stage III. Two segments of the commercial airlift industry—scheduled service carriers and charter carriers—make up the CRAF wartime capability. The scheduled service carriers—which include large passenger airlines such as American Airlines and Delta Air Lines and cargo carriers such as FedEx and UPS—pledge the majority of the aircraft accepted into the CRAF program. DOD will use most of the pledged aircraft only during a CRAF activation. In peacetime, scheduled service carriers operate commercial flights on regular routes and cannot afford unplanned disruptions to their airline networks. Because many DOD missions are not routine in their locations or timing, charter carriers—which have the flexibility to provide airlift based on their customers’ schedules—transport the majority of DOD’s peacetime, contingency, and stage I business. 
For some of the charter carriers, this peacetime business accounts for a significant portion of their total business revenue. However, because scheduled service carriers have large fleets, they are also a critical component of CRAF, and they provide the bulk of the CRAF strategic reserve in the event of a CRAF activation. The primary incentive for commercial carriers to participate in the CRAF program is the opportunity to obtain DOD peacetime business. DOD distributes peacetime business to CRAF participants using an entitlement process. CRAF carriers are awarded points based on the number of aircraft they commit to the program, the stage to which these aircraft are assigned, and other considerations as applicable to the individual airline. The amount of peacetime business CRAF participants are entitled to is determined in advance of any missions awarded. DOD makes this business available to the CRAF carriers to fulfill its peacetime business obligation to them, and it does so by offering the carriers the opportunity to fly various missions (for a list of all CRAF carriers, see appendix III). TRANSCOM and AMC share responsibility with respect to CRAF policy. TRANSCOM validates the requirements for the movement of personnel and cargo, determines which transportation mode will be used for these movements, and distributes the work to the appropriate component command. Once TRANSCOM determines that a movement will go by air, the mission requirement is handled by AMC. Within AMC, the Tanker Airlift Control Center (TACC) normally handles mission planning, assignment of airlift assets, mission control, and tracking. Mission planning includes determining whether military or commercial aircraft will fly a mission. CRAF carriers generally have priority over non-CRAF carriers for movements of passengers and cargo. The Fly CRAF Act generally requires DOD to use CRAF carriers when contracting for airlift services, whenever the carriers are available. 
If no CRAF participant is available to supply the airlift, DOD may use a non-CRAF carrier (either U.S. or foreign flagged) to fly the mission. For airlift services between two locations outside the United States, CRAF carriers must be used as long as they are “reasonably available.” Only foreign carriers operate larger aircraft, such as the AN-124 and IL-76, which are designed to carry outsized and oversized cargo that U.S. commercial carriers normally cannot accommodate. However, according to TRANSCOM officials, DOD uses foreign carriers through subcontracts with CRAF participants, and only rarely contracts directly with foreign carriers. DOD interprets the Fly CRAF Act as applying only to contracts that are specifically for airlift services, and not to contracts for services or supplies that may involve airlift or other transportation services. For example, according to TRANSCOM, DOD does not require the Fly CRAF preference to be applied to service or supply contracts such as the Logistics Civil Augmentation Program or the Defense Logistics Agency Prime Vendor Program. According to DOD officials, the current law and related contracting provisions provide the department with the flexibility to acquire the best value for products and services when executing service or supply contracts. DOD exceeded the flying hours needed to meet military training requirements in fiscal years 2002 through 2010 due to increased operational requirements associated with Afghanistan and Iraq; however, it does not know whether it used CRAF participants to the maximum extent practicable during this period. In fiscal years 2010 through 2012, DOD’s flying hours more closely matched its training plan. In keeping with its policy to both provide training within the military airlift system and use commercial sources of transportation to conduct eligible airlift missions, DOD has taken steps to provide CRAF participants with peacetime business. 
However, DOD does not use information from its process for monitoring flying hours to determine when it will use more hours than planned to meet training requirements, nor does it use this information to shift eligible airlift missions to CRAF participants so as to ensure that commercial sources are used to the maximum extent practicable, as required by DOD guidance. Unless DOD uses its information on flying hours to determine when it can shift eligible airlift missions to CRAF participants, it may be flying its military fleet unnecessarily. DOD officials say that using the military fleet to fly missions that are eligible to be shifted to CRAF participants is more expensive than using the CRAF carriers and could reduce these carriers’ level of participation in the CRAF program. The National Airlift Policy states that the “Department of Defense shall establish appropriate levels for peacetime cargo airlift augmentation in order to promote the effectiveness of the Civil Reserve Air Fleet and provide training within the military airlift system.” Consistent with that policy, DOD Instruction 4500.57 requires that DOD operate its fleet to meet its training requirements and also requires that it use commercial sources of transportation to the “maximum extent practicable.” DOD officials stated that they have been using military airlift beyond what was planned because the operations in Afghanistan and Iraq created additional airlift requirements, many of which could not be met using U.S. commercial sources. For example, some kinds of cargo—such as mine-resistant ambush-protected vehicles—are too large to fit inside the aircraft operated by CRAF participants. Military aircraft, along with some foreign aircraft such as the AN-124 and the IL-76, are able to accommodate these kinds of cargo. Additionally, missions in Afghanistan and Iraq often could not be flown by CRAF participants because of airspace restrictions on U.S. carriers operating in those countries. 
Finally, some missions have additional requirements that call for the use of military airlift, such as requirements that cargo be escorted by military personnel or that an aircraft land on an unpaved runway. Every year, DOD develops requirements for its military aircrews that serve as the basis for its flying hour program, which provides training and experience for the aircrews. These requirements consist mainly of two types of flying hours: “currency hours” and “experiencing hours.” Training flights conducted to log currency hours generally do not carry cargo or passengers and therefore do not compete with commercially operated missions; we excluded currency hours from our analysis for this reason. Experiencing, or “time in the air,” flights typically carry cargo or passengers and do compete with commercially operated missions. Officials told us that currency hour flights account for roughly 20 percent of the flying hour requirement and are funded through operations and maintenance funds, while experiencing hour flights account for approximately 80 percent of the requirement and, because they carry cargo or passengers, are funded through the Transportation Working Capital Fund. As a revolving fund account, the Transportation Working Capital Fund relies on customer reimbursements in exchange for the transportation services provided. The customer that requests airlift reimburses the fund for the mission performed, although some costs associated with mobilization capability and readiness may be funded by the Air Force. 
For the purposes of this report, “military airlift training requirements” refers to experiencing hours, because those hours are the ones that DOD must decide how to allocate to meet military airlift training requirements while also using CRAF participants to the maximum extent practicable. Figure 1 shows the percentage by which AMC exceeded the flying hours that it planned for experiencing requirements in fiscal years 2001 through 2012. DOD said that during these years it exceeded its flying hours for training because of the need to fly missions to support operations in Iraq and Afghanistan. To develop this chart, we compared AMC’s requirements for experiencing flying hours to the actual hours AMC flew with the primary airlift platforms—the C-5, C-17, and C-130—and expressed the difference as a percentage of the planned flying hours. We excluded tanker aircraft from this analysis, since there are no commercial aircraft in the CRAF program that are comparable to the KC-10 or KC-135. Recognizing the importance of the commercial carriers for meeting its future airlift requirements, DOD has taken steps to increase the amount of peacetime business it gives to CRAF participants. According to TRANSCOM’s Fiscal Year 2012 Annual Report, CRAF carriers remain essential in supplying transportation services and provide a critical part of DOD’s warfighting deployment capability. Further, TRANSCOM and AMC are using CRAF carriers to more directly support the forces in Afghanistan. CRAF participants have provided the majority of passenger movements and about a quarter or more of all cargo movements since fiscal year 2004. Figures 2, 3, and 4 show the extent to which DOD has relied on CRAF participants to provide airlift services. Over the last few years, both the number of CRAF participants and the number of aircraft pledged to the CRAF program have fluctuated, and it is not clear what level of support CRAF participants will provide in the future. 
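The percent-exceedance measure used for the chart is simple arithmetic: the difference between actual and planned experiencing hours, expressed as a percentage of the planned hours. A minimal sketch, using hypothetical hour totals since the underlying AMC data are not reproduced here:

```python
def percent_exceeded(actual_hours: float, planned_hours: float) -> float:
    """Percentage by which actual flying hours exceed planned flying hours.

    A positive result means the planned training requirement was exceeded;
    a negative result means actual hours fell short of the plan.
    """
    return (actual_hours - planned_hours) / planned_hours * 100

# Hypothetical example: 1.2 million actual hours flown vs. 1.0 million planned
print(round(percent_exceeded(1_200_000, 1_000_000), 1))  # 20.0
```

Each bar in a chart like figure 1 would be one fiscal year's value of this quantity for the C-5, C-17, and C-130 fleets combined.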
For example, as we noted in our 2009 report on CRAF, the number of charter aircraft enrolled in the CRAF program had declined from more than 60 aircraft in 2003 to as few as 19 in April 2008, before stabilizing at 29 charter aircraft in May 2008. Our analysis shows that CRAF participation as of fiscal year 2012 was still sufficient to allow DOD to meet its wartime requirements. However, according to some current CRAF participants, changes to the business environment, such as the ongoing economic downturn, have resulted in five of the participating carriers filing for bankruptcy over the last three years. Two of these carriers have already ceased providing airlift services entirely. Table 1 shows the level of airlift support provided by CRAF participants and military aircraft during the last three years. To support increasing the amount of business provided to CRAF participants, TRANSCOM has created a new organization called the Enterprise Readiness Center. According to an official with the Enterprise Readiness Center, one of the goals of the center is to explore ways to encourage DOD organizations, like the Defense Logistics Agency, to direct more air cargo business into the DOD-managed Defense Transportation System. Further, the center will also seek to preserve DOD’s airlift readiness capability, given the reduction in airlift volume, and to help DOD maintain efficiencies by ensuring that the Defense Transportation System is the primary source used by DOD entities to arrange transportation. To achieve this, the Enterprise Readiness Center proposes to improve the process for using the Defense Transportation System, create flexible rates, minimize overhead as a way to reduce rates, develop customer-based transportation solutions, and create an active dialogue with CRAF participants. 
As a way to further strengthen communications and the strategic relationship between DOD, the Department of Transportation, and CRAF participants, TRANSCOM and AMC also established an Executive Working Group in 2010. The Executive Working Group is a discussion forum used to address general issues related to the CRAF program, and its meetings provide updates on various studies, the status of CRAF-related efforts, and carrier concerns. DOD officials also told us that they have taken additional steps over the last few years to improve the distribution of business within the CRAF program. TRANSCOM has revised its process for awarding points to give more bonus points to carriers that fly additional peacetime missions, assume a greater risk of activation, and operate more modern, fuel-efficient aircraft. TRANSCOM has also revised the practice of awarding commissions. Larger carriers allow the smaller carriers on their teams the benefit of using their points to obtain DOD business, in exchange for commissions consisting of a percentage of the revenue the smaller carriers earn from this business. These commissions are one of the ways in which larger carriers earn revenue from the CRAF program, since they do not conduct many of the actual airlift missions in peacetime. However, according to an official at one carrier, these commissions had risen to as high as 9 percent of the revenue earned from a mission. TRANSCOM officials told us that they have capped the value of these commissions at 5 percent of mission revenue, in an attempt to ensure that smaller carriers earn enough profit from performing peacetime airlift missions. DOD intended for these efforts to strengthen the viability of the CRAF program. 
The opinions of CRAF participants varied on the extent to which these changes made the program more equitable, mostly depending on whether the carrier directly benefited from the changes. All of the carriers we spoke with indicated that they were planning to stay in the CRAF program for the immediate future. However, some added that if the revenue they were receiving decreased too much, they would reassess their participation and would consider not participating in future years. More than half of the CRAF participants we interviewed suggested that DOD could do more to increase the peacetime business it provides to them. Some of these carriers suggested that DOD’s use of foreign air carriers should be curtailed. According to DOD officials, foreign carriers primarily operate as sub-contractors to CRAF participants to move cargo that is too large for standard U.S. commercial aircraft, and only in rare cases would DOD contract directly with a foreign carrier. Furthermore, our analysis indicates that the use of foreign carriers has declined since its high point in fiscal year 2008. As shown in figure 5, payments made to foreign carriers have declined by more than 55 percent since fiscal year 2008. DOD does not use its process for monitoring flying hours to determine when it will exceed its planned training hours, and it does not use the information from this process to allocate eligible airlift missions to CRAF participants. As previously noted, DOD guidance requires TRANSCOM to meet its training needs while also using commercial sources of transportation to the “maximum extent practicable.” DOD officials told us that, consistent with this policy, meeting training needs was their priority. However, they also told us that flights provided by CRAF participants are less expensive than military flights, in part because commercial aircraft are designed to be more fuel-efficient, while military aircraft are designed to carry heavy cargo and land in austere locations. 
In addition, according to AMC data, once training requirements have been met, using commercial carriers for airlift missions can be less costly than using military aircraft. For example, according to an April 2013 analysis provided by AMC officials, the cost per pound to transport cargo using commercial aircraft such as the 747 and MD-11 can be between 22 and 35 percent lower than the cost of transporting the same cargo using military aircraft such as the C-5 and C-17. Currently, airlift requests are handled by different sections within the Tanker Airlift Control Center (TACC), depending on the type of airlift requested. Each of these sections has a different process for choosing whether to use commercial or military airlift to meet the request. Some airlift missions are conducted primarily by military airlift, while others are conducted by commercial sources. However, while TRANSCOM performs periodic monitoring of the distribution of missions between military and commercial sources, officials acknowledged that this monitoring does not consider the extent to which training requirements have already been met or will be met with planned missions. According to DOD officials, airlift missions that are not conducted to satisfy training requirements should be performed by CRAF participants, except when there is some other feature of the mission that requires military airlift. Knowing when more flying hours are going to be used than are needed to meet training requirements—and using this information to shift eligible airlift missions to CRAF participants—would allow DOD to use commercial sources of transportation to the maximum extent practicable. DOD officials told us that operations in Iraq and Afghanistan had ensured that there were enough airlift missions available both to support training requirements and to provide adequate peacetime business for CRAF participants. 
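To illustrate what the 22 to 35 percent cost difference from AMC's April 2013 analysis implies, a minimal sketch follows; the per-pound dollar rate used below is purely hypothetical (the analysis's actual rates are not reproduced in this report), and only the savings range comes from the source:

```python
def commercial_rate(military_rate: float, savings_pct: float) -> float:
    """Commercial cost per pound implied by a military cost per pound
    and a given percentage savings from using commercial airlift."""
    return military_rate * (1 - savings_pct / 100)

MILITARY_RATE = 2.00  # hypothetical military cost per pound, in dollars

# The bounds of AMC's reported savings range for commercial aircraft
for savings in (22, 35):
    rate = commercial_rate(MILITARY_RATE, savings)
    print(f"{savings}% lower than military: ${rate:.2f}/lb")
```

At a hypothetical $2.00 per pound for military airlift, the reported range corresponds to commercial rates of roughly $1.30 to $1.56 per pound.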
Further, they noted that there are a number of reasons that DOD might exceed its flying hours, such as the need to transport particularly large cargo, special conditions that require military aircraft (such as unpaved runways), and restrictions on U.S. carriers operating in Iraq and most of Afghanistan. Given such requirements, officials questioned the utility of developing a process to monitor the balance between satisfying flying hour training requirements and providing CRAF participants with additional peacetime business; they said that they were uncertain how many additional missions would be eligible to be flown by commercial carriers. However, TRANSCOM and AMC officials have acknowledged that they have not collected data that would allow them to determine how many of these missions could be shifted to CRAF participants. Furthermore, while we acknowledge that there may be a number of legitimate reasons why military aircraft would have to be used for missions even after training requirements have already been met, it is not clear that such reasons are always present when military airlift is used. For example, AMC completed a study in December that was intended, in part, to address short-term concerns regarding the CRAF program and its participants. This study noted that some missions were flown on military aircraft only because the necessary load plans for commercial aircraft had not been developed in a timely manner—not because of any requirement that the cargo be flown on military aircraft. The study recommended that DOD airlift customers develop commercial load plans to facilitate scheduling of commercial aircraft in these situations. This study acknowledges that some missions currently flown by military airlift could instead be flown by CRAF participants without negatively affecting training hours. 
After the drawdown in Afghanistan concludes, the need for airlift is expected to decline, which will reduce both training opportunities and the business available for CRAF participants. In addition, as airlift needs decrease, DOD may need to fly a higher percentage of its channel missions in order to provide its crews with sufficient training opportunities, which could further decrease its use of CRAF participants. DOD officials told us that they expect peacetime business to fall significantly after fiscal year 2015. This decrease has already begun; peacetime revenues of CRAF participants have already dropped by nearly one third, from their high point of approximately $3 billion in fiscal year 2010 to about $2 billion in fiscal year 2012, as shown in figure 5. Commercial carriers are projected to be used even less in fiscal year 2013 and beyond, until revenues return to pre-September 11, 2001, levels of $700 million or less. This represents a potential 66 percent decline in DOD business available to CRAF participants, which may further exacerbate the economic pressures under which CRAF participants are operating. By not using the information it has on flying hours to help determine when it can allocate eligible airlift missions to CRAF participants, DOD loses the ability to determine whether it is using commercial sources—such as CRAF participants—to the maximum extent practicable, as required by DOD guidance. As a result, DOD may be using its military fleet more than necessary, thereby risking reduced participation of commercial carriers in the CRAF program. DOD provided several reasons for restricting commercial carriers from transporting partial plane loads of cargo over certain routes, based on its need to promote efficiency, meet its military airlift training requirements, and fulfill peacetime business obligations to CRAF participants. 
According to TRANSCOM officials, in 2001, DOD began restricting commercial air carriers from transporting partial plane loads of cargo over certain overseas channel routes in order to improve the efficiency and effectiveness of the cargo missions flown over these routes and to keep cargo flights in the channel route system that DOD relies on to satisfy its training requirements and business obligations to CRAF participants. In May 2012, TRANSCOM issued a memorandum reiterating its policy of restricting commercial aircraft—including CRAF participants—from transporting partial plane loads of cargo over these routes. According to TRANSCOM officials responsible for coordinating airlift for DOD, this policy—which has been in place for over a decade—is a tool to help DOD increase the efficiency of its cargo shipments airlifted over channel routes and minimize costs to DOD. DOD officials reported that in the late 1990s and early 2000s, commercial air carriers began transporting an increasingly larger share of DOD cargo shipments, leaving a relatively small amount of cargo for military aircraft to transport over channel routes. DOD officials said that before the policy was implemented, military aircraft would often conduct channel route missions with partial loads of cargo instead of completely filling the aircraft with cargo, which is more cost-effective. In addition, in the late 1990s and early 2000s, DOD was experiencing a shortage of flying hours for training. During this same period, commercial carriers were flying a large number of airlift missions, which exacerbated DOD’s flying hour shortage, because many of the airlift missions that military aircrews could have conducted for training purposes were being lost to commercial air carriers. 
Lastly, according to TRANSCOM officials, because many of the partial planeload missions performed by commercial carriers were negotiated under tender contracting arrangements—which are not included in the annual amount of peacetime business DOD guarantees to the CRAF program—DOD’s ability to fulfill its peacetime business obligations to CRAF was being challenged. The National Airlift Policy states that military and commercial resources are equally important—and interdependent—in fulfilling the national defense airlift objective. The policy also provides that the goal of the U.S. government is to maintain in peacetime military airlift resources that are manned, equipped, trained, and operated to ensure the capability to meet wartime requirements. DOD guidance also notes that TRANSCOM may be required to maintain a readiness posture that includes operating military airlift internationally during peacetime, and that it must conduct such operations at the level necessary to meet operational and training requirements. According to DOD officials who are responsible for managing DOD’s strategic airlift requirements, TRANSCOM takes steps to meet DOD’s flying hour training requirements while also providing commercial carriers with peacetime business; however, the flying hour training requirement takes precedence. DOD performs a variety of types of airlift missions that allow military aircrews to meet their flying hour training requirements while also delivering the cargo needed to sustain military operations to units located overseas. These mission types include the following:

Channel airlift missions: regularly scheduled airlift for movement of sustainment cargo and/or personnel between designated aerial ports of embarkation and debarkation over validated contingency or distribution channel routes.

Special assignment airlift missions: airlift missions requiring special pickup or delivery at locations other than those established within the approved channel structure, or requiring special consideration because of the number of passengers, the weight or size of the cargo, the urgency or sensitivity of the movement, or other special factors.

Contingency missions: airlift for movement of cargo and/or personnel in support of military operations directed by appropriate authority to protect U.S. interests.

Exercise missions: airlift for movement of cargo and/or personnel in support of a military maneuver or simulated wartime operation involving planning, preparation, and execution that is carried out for the purpose of training and evaluation.

Theater direct delivery: a theater-based distribution system wherein delivery to destinations forward of major aerial ports of debarkation can be performed by any available aircraft, including those normally used for intertheater requirements.

Channel route missions are conducted by both military aircraft and CRAF participants and account for a large portion of DOD’s overall airlift activity. During the last three fiscal years, at least 30 percent of DOD’s total cargo movement was over channel routes (see figure 6). TRANSCOM officials stated that to maximize efficiency, DOD requires aircraft conducting channel route missions—whether military or commercial—to be completely full of cargo before takeoff. According to TRANSCOM officials, the policy restricting commercial carriers from transporting partial loads over channel routes provides DOD with a tool to maximize the amount of cargo transported in a single mission over a channel route. Cargo previously transported by commercial carriers in partial loads is now consolidated at aerial ports of embarkation. TRANSCOM officials reported that historically, commercial carriers transporting partial loads had conducted a large portion of DOD’s airlift business. 
These commercial airlift missions involved transporting cargo to and from locations that were also being serviced by military aircraft conducting channel missions. DOD was not maximizing the efficiency of its channel route missions and minimizing costs, because aircraft were not filled to capacity. To reduce the redundancy of transporting cargo using both modes of delivery, DOD began restricting commercial carriers from conducting partial plane load missions over channel routes, and it now generally requires commercial aircraft conducting channel missions to be full of cargo before takeoff. According to TRANSCOM officials, the policy ultimately played a role in increasing the efficiency of DOD’s air cargo movements over channel routes. A RAND report issued in 2003, which analyzed the costs of transporting cargo over channel routes using commercial airlift versus military airlift, found that DOD would decrease airlift costs if it reduced the amount of cargo transported by commercial carriers conducting partial plane load missions and shifted that cargo to aircraft flying with full plane loads. Taking this step would be less expensive than allowing military aircraft to conduct partial plane load missions over channel routes. In addition, DOD’s policy allows it to offer more training opportunities for its aircrews during periods of low demand for airlift. Rather than relying on other types of missions—such as contingency missions—to accomplish training, AMC prefers to schedule flying hours for training on channel route missions, which are regularly scheduled, planned in advance, consistent, and predictable. Channel route missions are used to maintain and upgrade pilots’ flying skills and, as part of the training, can include transporting cargo from specific military locations within the United States—such as McGuire Air Force Base in New Jersey—to overseas military bases in countries like Germany or Kuwait. 
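The efficiency argument behind consolidation can be made concrete with a minimal sketch. All weights and the aircraft capacity below are hypothetical illustrations, not DOD data: consolidating several partial shipments at the aerial port of embarkation requires fewer sorties than flying each shipment on its own aircraft.

```python
import math

# Assumed usable cargo capacity per aircraft (hypothetical figure).
CAPACITY_LBS = 90_000

# Hypothetical partial shipments awaiting movement over a channel route, in pounds.
partial_shipments = [20_000, 35_000, 15_000, 40_000, 25_000]

# Without consolidation: one sortie per partial shipment.
sorties_unconsolidated = len(partial_shipments)

# With consolidation at the aerial port of embarkation: fill each aircraft
# to capacity, so the sortie count is driven by total weight, not shipment count.
sorties_consolidated = math.ceil(sum(partial_shipments) / CAPACITY_LBS)

print(sorties_unconsolidated, sorties_consolidated)  # 5 2
```

Under these assumptions, consolidation cuts five partially loaded sorties down to two full ones, which is the efficiency gain the policy is designed to capture.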
These missions are conducted on a regularly scheduled basis and include DOD cargo, so they provide commanders with reassurance that they will receive planned amounts of sustainment cargo within a designated time frame. TRANSCOM officials told us that in the late 1990s and early 2000s commercial aircraft had been conducting a large portion of DOD’s airlift business, but the overall demand for DOD airlift was relatively low; as a result, the military began experiencing a shortage of flying hours to use for training. Many of the airlift missions flown by commercial carriers involved transporting cargo that could have been transported by military aircraft. DOD’s policy of restricting commercial carriers from transporting partial loads over channel routes has allowed DOD to shift cargo into the channel route system, increase the number of channel route missions available for aircrews to satisfy flying hour training requirements, and address DOD’s flying hour shortage. In 2003, the RAND Corporation conducted a study of the peacetime tempo of DOD’s air mobility operations and asserted that DOD needed to take steps to address its shortage of flying hours. The report found that, during fiscal years 2000 and 2001, aircrew personnel encountered a flying hour shortage because international military activity was relatively calm and there were fewer U.S. missions that required airlift support. The report also pointed out that because commercial carriers had begun conducting a large portion of DOD’s airlift business, decreasing the amount of airlift business given to commercial carriers would help reverse this trend and help alleviate DOD’s flying hour shortage. The report’s conclusions supported measures taken by DOD to implement a policy to decrease peacetime business provided to commercial carriers when necessary to support training requirements. 
With DOD’s policy in place, more cargo was being funneled into the channel route system, and DOD was able to increase the number of channel route missions offered to military aircrews, thereby helping to alleviate the shortage of flying hours. Further, TRANSCOM officials said that DOD’s policy of restricting commercial carriers from transporting partial planeloads of cargo over certain channel routes was also implemented in part to help DOD fulfill its peacetime business obligations to CRAF. Through the CRAF peacetime airlift contract, DOD provides a certain level of airlift business to CRAF participants; DOD negotiates a designated amount of business that it is committed to provide to CRAF participants—as an incentive for commercial carriers to participate in the CRAF program—and distributes this business among the CRAF participants currently enrolled in the program. This business consists, in part, of missions flown across channel routes. TRANSCOM officials reported that many airlift missions conducted by commercial carriers carrying partial loads across channel routes were being arranged through tender-based contractual agreements. Tender-based agreements for airlift services are offers by an air carrier to provide transportation services at a specified rate. According to TRANSCOM officials, business associated with tender-based contracts falls outside of the CRAF peacetime business entitlement obligation. TRANSCOM officials said that this practice was diminishing the pool of peacetime business that DOD could provide to CRAF participants under the CRAF peacetime business entitlement process. According to TRANSCOM officials, the policy limiting the amount of tender-related airlift business provided to commercial carriers increases the efficiency of channel route missions, alleviates the shortage of flying hours for training, and allows DOD to provide CRAF participants with peacetime business to fulfill its CRAF peacetime business obligations. 
In addition, TRANSCOM officials said that in periods of high demand for airlift, such as the last several years, DOD can provide CRAF participants with more channel route business, because military aircrews can satisfy their training requirements for flying hours by conducting other airlift missions, such as contingency and special assignment airlift missions. Some CRAF participants expressed concerns to us that the original rationale for DOD’s policy no longer exists and that the policy may prevent DOD from using the less costly commercial airlift option to transport partial loads of cargo over channel routes. First, according to TRANSCOM officials, the original rationale for the policy was to ensure that DOD could provide sufficient flying hours to train its aircrews. Two of the CRAF participants we interviewed stated that this policy was no longer necessary because DOD no longer faces flying hour shortages as it did in the late 1990s and early 2000s. DOD officials stated that it is important to retain this policy as a management tool, especially since DOD’s need for airlift is projected to return to pre-September 2001 levels by 2015. According to DOD officials, data from fiscal year 2000 illustrate this point: in 2000, DOD needed to reserve about 57 percent of its channel route missions for training and provided about 28 percent of the channel route missions to CRAF participants. In contrast, during a period of high demand for airlift, such as fiscal year 2012, DOD reserved about 31 percent of its channel route missions for training and was able to provide more than 60 percent of its channel route missions to CRAF participants. See figure 7 below. 
Second, a CRAF participant we met with emphasized that using commercial aircraft to transport partial loads is less costly than using military aircraft to do so, because when using commercial airlift, DOD pays by the pound and only for the cargo airlifted, rather than incurring the entire cost of using a military aircraft to carry a partial load. DOD officials acknowledge that using commercial carriers to transport partial plane loads of cargo is less expensive than using military aircraft for this purpose, and they note that the policy restricting commercial carriers from transporting partial plane loads of cargo over certain overseas channel routes has a provision allowing commercial carriers to conduct such missions on a case-by-case basis, when needed to meet DOD’s requirements. For example, if a customer requires a critical, time-sensitive item and cannot wait for it to be transported by a regularly scheduled channel mission, commanders still may have the option to use a commercial carrier to transport a partial load to a designated location using a channel route. In addition, commercial carriers can transport cargo outside the channel route system under a variety of other DOD airlift transportation contracts. For example, TRANSCOM’s World-Wide Express program, an airlift transportation program available only to CRAF participants, is used to provide international commercial express package transportation service for shipments up to and including 300 pounds. This program provides DOD with the ability to ensure that commanders can receive unique, time-sensitive cargo items when no channel mission is available within a specified time frame. Over the last five years, this program has consistently generated over $100 million in airlift business annually for CRAF participants. 
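The cost trade-off described above can be illustrated with a simple sketch. Every rate and cost below is a hypothetical figure, not actual DOD or carrier pricing: a pay-by-the-pound commercial tender scales with the cargo actually shipped, while a military sortie incurs its full cost regardless of how much cargo is aboard.

```python
def commercial_cost(cargo_lbs: float, rate_per_lb: float) -> float:
    """DOD pays only for the weight actually shipped (pay-by-the-pound)."""
    return cargo_lbs * rate_per_lb


def military_cost(flight_cost: float) -> float:
    """DOD incurs the full cost of the sortie regardless of load."""
    return flight_cost


# Hypothetical values: a 20,000 lb partial load at $2.50/lb versus a
# $150,000 military sortie flown half-empty.
partial_load = commercial_cost(20_000, 2.50)   # 50,000.0
sortie = military_cost(150_000)                # 150,000

print(partial_load < sortie)  # the commercial option is cheaper in this case
```

This is why the case-by-case exception exists: for a single urgent partial load, the marginal commercial cost can be well below the fixed cost of dispatching a military aircraft.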
According to TRANSCOM and AMC analysis, as the drawdown efforts in Afghanistan proceed over the next few years, airlift demand is expected to decline to pre-September 11, 2001, levels. It will therefore be important for DOD to plan ahead and ensure that military aircrews are provided with ample opportunity to fulfill training requirements and avoid a shortage of flying hours. In preparation for this decline in the demand for airlift, TRANSCOM officials emphasized that DOD’s policy to restrict commercial carriers from transporting partial loads over channel routes may continue to serve as an important management tool, allowing DOD to balance the goals of operating its channel route system as efficiently as possible, providing enough training opportunities to military aircrews, and fulfilling its CRAF peacetime business obligations. DOD is conducting several interrelated studies to determine its future airlift requirements; however, it is unclear whether the planned size of CRAF will be adequate to meet those requirements. The National Defense Authorization Act for Fiscal Year 2013 requires DOD to conduct a study that assesses its mobility needs—referred to as the Mobility Requirements and Capabilities Study 2018—which DOD had not begun at the time of our review. In addition, in response to the changing business environment, AMC is conducting a two-phase study to assess the readiness of CRAF participants to augment DOD’s airlift capacity and the viability of the CRAF program. The CRAF Phase 1 study was completed in December 2012, and according to officials, Phase 2 is scheduled to be completed in the fall of 2013. Meanwhile, DOD has been taking steps to continue to encourage commercial carriers to participate in the program. Until DOD finalizes these assessments, it will be unclear whether the planned size of CRAF will be adequate to meet future airlift requirements. 
DOD reports that there are more aircraft committed to the CRAF program than are needed to fulfill the wartime requirements established by the Mobility Capability Requirements Study 2016 (MCRS–16), which was issued in 2010. However, it is not clear whether the current level of CRAF participation will provide the right number and mix of aircraft to meet future requirements, since DOD has issued new strategic guidance that may affect DOD’s airlift requirements. While the number of aircraft pledged to the program has fluctuated, DOD’s past analysis showed that the projected size and mix of the CRAF fleet was more than adequate to satisfy all war planning scenarios established by the MCRS–16. According to DOD data, as of March 2012, CRAF participants had enrolled 15 percent more aircraft in the program than would be needed to meet established airlift requirements. The MCRS–16 assessed components of the mobility capabilities that DOD would need for possible future strategic environments and was intended to help DOD make investment decisions regarding mobility systems, such as how much to invest in strategic airlift to meet wartime needs. Among other things, the study examined how changes in the mobility system affect the outcomes of major operations and assessed the associated risks. The MCRS–16 determined that, with few exceptions, the mobility capabilities projected for 2016 would be sufficient to support the most demanding projected requirements. The study assessed the major mobility systems required to move personnel and supplies from their point of origin to their destination: sealift, surface transportation, and airlift components, to include strategic airlift, aerial refueling, and CRAF passenger and cargo. To support decisions regarding future mobility force structure, the MCRS–16 developed three demanding cases consisting of conflicts and natural disasters with multiple scenarios occurring over a 7-year period and requiring the use of mobility capabilities. 
The MCRS–16 used approved DOD planning scenarios to develop the three cases. For example, in one case, U.S. forces might be required to conduct a large land campaign and a long-term irregular warfare campaign while also responding to homeland defense missions. In another case, U.S. forces might be conducting two nearly simultaneous large-scale campaigns, while also responding to three nearly simultaneous domestic events and conducting other operations. Since its last assessment of its airlift requirements in 2010, DOD has issued new strategic guidance. Specifically, DOD’s strategic guidance issued in January 2012 calls for, among other things, an increased focus on the Asia-Pacific region and resizing U.S. forces, both of which may affect airlift needs. For example, an increased focus on the Asia-Pacific region could affect operational plans in that theater and require changes to the number and type of forces assigned to the region, as well as the associated airlift requirements. In addition, the resizing of DOD forces to achieve security objectives could have implications for the choice of commercial and military aircraft used to support future military operations. In March 2013, the Secretary of Defense tasked DOD senior leadership to examine the department’s strategic assumptions, following up on the January 2012 Defense Strategic Guidance, which, among other things, called for rebalancing military forces toward the Asia-Pacific region. This review examines the choices underlying the department’s strategy, force posture, investments, and institutional management, as well as past and future assumptions, systems, and practices. The results of the review will frame the secretary’s guidance for the fiscal 2015 budget and will be the foundation for the Quadrennial Defense Review expected to be issued in February 2014. 
The National Defense Authorization Act (NDAA) for Fiscal Year 2013 requires DOD to conduct a new mobility capabilities requirements study—referred to as the Mobility Requirements and Capabilities Study 2018 (MRCS–18)—based in part on the new defense strategy mentioned above. This new assessment may provide decision makers with the analytical data needed to determine DOD’s airlift capability requirements and the number and type of aircraft CRAF participants would need to pledge to the program in order to support these requirements. Among other things, the NDAA requires DOD to:

- describe and analyze the assumptions made by the Commander of the U.S. Transportation Command with respect to aircraft usage rates, aircraft mission availability rates, aircraft mission capability rates, aircrew ratios, aircrew production, and aircrew readiness rates;
- assess the requirements and capabilities for major combat operations, lesser contingency operations as specified in the Baseline Security Posture of the Department of Defense, homeland defense, defense support to civilian authorities, other strategic missions related to national missions, global strike, the strategic nuclear mission, and direct support and time-sensitive airlift missions of the military departments; and
- identify mobility capability gaps, shortfalls, overlaps, or excesses and assess the risks associated with the ability to conduct operations and recommended mitigation strategies where possible.

Until DOD completes the MRCS–18, decision makers in DOD and Congress may not have all of the relevant information they need to ensure that DOD’s mobility capabilities and requirements are sized most effectively and efficiently to support the U.S. defense strategy. 
DOD acknowledges the requirements set forth in the National Defense Authorization Act for Fiscal Year 2013 and fully intends to cooperate and work to complete the assessment, but according to AMC and TRANSCOM officials, no time frame has been established for when this study will be completed. Further, AMC has begun conducting additional studies to assess its airlift requirements and how the CRAF program will support near-term and long-term requirements. AMC’s CRAF study is being conducted in two phases and will help AMC to ensure that the commercial airlift forces associated with the CRAF program are prepared to support the drawdown of forces in Afghanistan by the end of calendar year 2014. Phase 1 of the CRAF study, completed in December 2012, focused on the international long-range segment of CRAF, which will be most affected by the decreasing demand for airlift resulting from the drawdown of forces in Afghanistan. It identifies a series of issues facing CRAF during the withdrawal and for the short term following the drawdown. A number of observations are directly related to the drawdown period and the period immediately following. These require near-term actions to ensure that commercial airlift support will be available when it is needed to support national interests. For example, the Phase 1 study noted that a future study should assess the risk and reward factors that may affect further CRAF participation due in part to the state of flux in the current charter air industry resulting from economic pressures brought on by a decline in commercial passenger charter opportunities. In addition to discussing certain recommendations from the Phase 1 study, Phase 2 of the CRAF study will focus on maintaining the future viability of the CRAF program and its readiness to augment military airlift capability and support surge requirements. 
This follow-on study will undertake an in-depth analysis of issues identified in Phase 1 that could affect the long-term viability and reliability of the CRAF program. The findings from the Phase 2 study will propose courses of action and mitigation strategies to ensure CRAF readiness now and in the future, balancing government interests and mandates with the dynamics of the changing industry. Furthermore, the CRAF Phase 2 study will evaluate the market, the carriers and their business base, and the existing business models within industry and government in order to provide insights and recommend actions to ensure that the CRAF program can continue to meet wartime requirements in the future. AMC and TRANSCOM expect this study to be completed by the fall of 2013. According to AMC officials, one of the issues that will be addressed in the Phase 2 study is the recommendation from the Phase 1 study that DOD continue the suspension of the 60/40 rule through fiscal year 2014. The 60/40 rule was created as a safeguard for DOD. Under the 60/40 rule, DOD business cannot provide more than 40 percent of a carrier’s revenue and the remaining 60 percent of the carrier’s revenue must be earned through sources other than DOD, generally referred to as commercial sources or commercial air transportation. Carriers that earn more than 40 percent of their revenue from DOD may be penalized by reductions in their entitlement to DOD business. Prior to fiscal year 2010, the rule was based on an air carrier’s revenue. However, in 2010 the rule was modified so that it calculated the percentage of business in block hours rather than amount of revenue. As of May 2010, the rule has been suspended. One of the original goals of the 60/40 rule was to ensure that CRAF carriers maintained a strong commercial business base, efficient operations, and modern fleets to help prevent them from going out of business when DOD demands were low. 
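The 60/40 threshold described above reduces to a simple share check. The sketch below is illustrative only, not an official DOD formula: the function name and figures are hypothetical, and, as noted in the text, the measured quantity was revenue before fiscal year 2010 and block hours after the 2010 modification.

```python
# Hypothetical sketch of the 60/40 rule as described in the text:
# DOD business may not exceed 40 percent of a carrier's total.
DOD_SHARE_LIMIT = 0.40


def exceeds_60_40(dod_amount: float, total_amount: float) -> bool:
    """Return True if DOD business exceeds 40% of the carrier's total
    (revenue pre-FY2010, block hours thereafter)."""
    return dod_amount / total_amount > DOD_SHARE_LIMIT


# A carrier with 450 DOD block hours out of 1,000 total is over the limit
# and, with the rule in force, could face reductions in its entitlement
# to DOD business; a carrier at 350 of 1,000 is within the limit.
print(exceeds_60_40(450, 1_000))   # True
print(exceeds_60_40(350, 1_000))   # False
```

The design intent is visible in the check itself: by capping the DOD share, the rule pushes carriers to keep at least 60 percent of their business commercial, preserving the healthy commercial base the program depends on.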
Limiting the proportion of a carrier’s business that could come from DOD would also provide DOD with a surge capability to draw on if demand grew suddenly. According to TRANSCOM and AMC officials, the 60/40 rule was suspended so that commercial carriers would not be penalized for supporting increased DOD airlift demands. If carriers continued to increase support while still being required to observe the 60/40 rule, the rule would prove counterproductive: DOD would be asking for increased support while potentially issuing penalties to the carriers providing it. Some of the carriers we spoke with stated that the 60/40 rule had not been strictly enforced and that the suspension of the rule had no effect on the amount of business they received as a result of participating in the CRAF program. However, according to DOD officials, five carriers have gone bankrupt in the last three years, and two of them have stopped offering airlift services even though this rule has been suspended. Based on data included in the MCRS–16, DOD counts on the CRAF program to provide most of the passenger airlift services, as well as a significant amount of the cargo services, needed to support wartime requirements. Therefore, CRAF must maintain the ability to respond to combatant commander requirements, and DOD must develop accurate requirements if CRAF is to maintain that ability. For that reason, until DOD completes the MRCS–18 and the CRAF Phase 2 study, it will be unable to determine the correct size and mix of the CRAF fleet to meet future airlift requirements. The nature of U.S. military operations in today’s global environment requires DOD to be able to rapidly deploy personnel and cargo around the world and sustain forward deployed forces. DOD has taken a number of steps to strengthen the CRAF program while also ensuring that military aircrews receive required training. 
However, over the last few years, DOD has flown more hours than required to train its aircrews, thereby possibly reducing the level of peacetime business available to CRAF participants. The anticipated decline in DOD’s peacetime business over the next few years, combined with continuing business pressures in a highly competitive industry, highlights the need for a process to ensure that DOD maximizes the use of its commercial partners. However, DOD does not use the process it has for monitoring training hours to determine when it can allocate eligible airlift missions to CRAF participants. If DOD does not use the information provided by its existing process, it will be unable to determine whether it is using commercial carriers to the maximum extent practicable, as required by DOD guidance. Further, DOD may be using its military fleet—which officials say is more expensive to operate than commercial alternatives—more than necessary, while risking the CRAF participation needed to ensure wartime readiness. To balance the use of military and civilian aircraft and ensure that commercial carriers participating in the CRAF program are used to the maximum extent practicable, we recommend that the Secretary of Defense direct the Secretary of the Air Force and the Commander, U.S. Transportation Command—in conjunction with the Commander, Air Mobility Command—to use the Air Mobility Command’s existing process for monitoring training hours to determine when it can shift eligible peacetime airlift workload from military to commercial sources. We provided a draft of this report to DOD for comment. In its written comments, reproduced in appendix IV, DOD concurred with our recommendation and stated that it believes implementing the recommendation will further improve the Civil Reserve Air Fleet program. 
We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Secretary of the Air Force, the Under Secretary of Defense (Acquisition, Technology and Logistics), and the Commander, Air Mobility Command. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or merrittz@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To determine whether DOD has been meeting its training requirements, we reviewed Air Force guidance on the development of flying hour requirements, as well as DOD guidance on flying hours for training. We also spoke with officials from U.S. Transportation Command (TRANSCOM) and Air Mobility Command (AMC) about how the training requirements are developed. We then analyzed flying hour data from the Reliability and Maintainability Information System (REMIS) to determine the extent to which the airlift fleet—C-5, C-17, and C-130—was being flown in excess of training requirements. We assessed the reliability of these data by interviewing officials from the REMIS program office at the Air Force Life Cycle Management Center to understand the steps that have been taken to ensure the reliability of the database. In addition, we reviewed documentation relating to the system and compared the data with alternate sources. We concluded that the data from REMIS were reliable for the purposes of this engagement. We then compared the flying hour data from REMIS to the flying hour requirements developed by AMC. 
To determine whether DOD was providing Civil Reserve Air Fleet (CRAF) participants with peacetime business, we interviewed officials with TRANSCOM and AMC on the management of the CRAF program, recent changes that had been made to the program, and concerns about the future of the program. In addition, we conducted interviews with representatives from 21 of the 30 CRAF participants who responded to our request for an interview in October and November 2012, to obtain information on the CRAF program, their perspective on which elements of the program worked and which did not, and their willingness to participate in the program in the near future. Since that time, 2 CRAF participants—one of which we interviewed—have gone out of business and are no longer members of the CRAF program. As of April 2013, there were 28 CRAF participants included in the CRAF program. We also analyzed program-related documents from TRANSCOM, AMC, and CRAF participants, as well as guidance on the use of CRAF and commercial transportation. Furthermore, we analyzed data from fiscal years 2001 through 2012 from two systems—the Commercial Operations Integrated System and an internal database managed by the Tanker Airlift Control Center (TACC) within AMC—to understand the extent to which CRAF participants were used compared with military airlift and foreign carriers. We assessed the reliability of these sources by reviewing documentation on the systems, comparing these data with data from alternate sources, and conducting interviews with knowledgeable officials. We concluded that the data from these systems were reliable for the purposes of this engagement. To assess the extent to which DOD has justified restricting commercial carriers from transporting partial plane loads of cargo over channel routes, we reviewed DOD’s policy for restricting commercial carriers from flying over channel routes. The policy we reviewed helped us identify which channel routes were designated as restricted. 
We then conducted interviews with TRANSCOM and AMC officials to obtain information on the rationale for creating the policy and the operational and strategic benefits the policy provides for DOD. In addition, we reviewed fiscal year 2000 and fiscal year 2012 channel route airlift transportation data to determine the extent to which DOD was using military aircraft rather than CRAF participants to conduct channel route missions, and we discussed the circumstances surrounding those decisions with TRANSCOM officials. We also conducted interviews and obtained written responses from CRAF participants to obtain additional perspectives on how the policy is affecting the CRAF program. We also reviewed previous reports and studies conducted by the RAND Corporation and the Council for Logistics Research Inc. that addressed DOD’s use of channel routes, the impact of using commercial carriers in lieu of military aircraft on DOD’s aircrew training program, and the impacts the policy has had on overall cargo management. Reviewing this historical information provided us with additional insight into DOD’s justification for implementing the policy. To assess whether DOD has established future requirements for the CRAF program and how the planned size of CRAF compares with those requirements, we obtained and reviewed various studies conducted by DOD to assess its strategic airlift capabilities, such as DOD’s Mobility Capability Requirements Study 2016 and the AMC 2012 CRAF study. We also collected fiscal year 2011 through 2013 data documenting DOD’s current inventory of CRAF aircraft and compared these data with DOD’s current airlift requirements. In addition, we conducted interviews with TRANSCOM and AMC officials to determine what steps are being taken to establish future requirements and to gain their perspective on the challenges they expect to face as they continue to manage the CRAF program. 
We also reviewed a provision in the National Defense Authorization Act for Fiscal Year 2013 that requires DOD to conduct a new study of mobility capabilities and requirements. We discussed the status of the requirement with TRANSCOM and AMC officials to determine what time frames and milestones have been established to begin and complete this study. We also reviewed DOD’s defense strategic guidance issued in January 2012 to assess factors that may affect DOD’s future airlift needs. To gather information for these objectives, we reviewed documentation and interviewed officials from the following organizations:

Office of the Under Secretary of Defense for Acquisition
Office of the Deputy Assistant Secretary of Defense (Transportation Policy)
Strategy, Policy, and Logistics (TCJ5/4)
Acquisition (TCAQ)
Office of the Staff Judge Advocate (TCJA)
Enterprise Readiness Center (ERC)
J-3 Operations and Plans, Sustainment Division (TCJ3-G)
618th Air and Space Operations Center (TACC)
Commercial Airlift Division (A3B)
CRAF participants (see appendix III for the CRAF participants we interviewed)
National Air Cargo Association (NACA)

We conducted this performance audit from August 2012 to June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence and data obtained were sufficiently reliable for our purposes and provide a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: List of Civil Reserve Air Fleet Participants (As of April 2013)

CRAF Carrier
1. ABX Air, Inc.
2. Air Transport International LLC
3. Alaska Airlines, Inc.
4. Allegiant Air LLC
5. American Airlines, Inc.
6. Atlas Air, Inc.
7. Delta Air Lines, Inc.
8. Evergreen International Airlines, Inc.
9. Federal Express Corp.
10. Hawaiian Airlines, Inc.
11. Jet Blue Airways Corp.
12. Kalitta Air LLC
13. Lynden Air Cargo LLC
14. Miami Air International, Inc.
15. MN Airlines LLC (DBA Sun Country Airlines)
16. National Air Cargo Group, Inc. (DBA Murray, DBA National Airlines)
17. North American Airlines, Inc.
18. Northern Air Cargo
19. Omni Air International, Inc.
20. Polar Air Cargo Worldwide, Inc.
21. Ryan International Airlines, Inc.*
22. Sky Lease 1, Inc. (DBA Trade Winds Airlines)
23. Southern Air, Inc.
24. Southwest Airlines Company
25. Tatonduk Outfitters, Ltd. (DBA Everts Air Cargo)
26. United Airlines, Inc.
27. United Parcel Service Company
28. US Airways, Inc.
29. World Airways, Inc.

In addition to the contact named above, Suzanne Wren, Assistant Director; Jim Ashley; Namita Bhatia-Sabharwal; Jason Jackson; James Lackey; Joanne Landesman; Tamiya Lunsford; Michael Shanahan; Mike Shaughnessy; and Amie Steele made key contributions to this report.
To move passengers and cargo, DOD supplements its military aircraft with cargo and passenger aircraft from volunteer commercial carriers participating in the CRAF program. Participating carriers commit their aircraft to support a range of military operations in exchange for peacetime business. The House Armed Services Committee mandated GAO to report on matters related to the CRAF program. GAO assessed whether DOD (1) met its military airlift training requirements while also using CRAF participants to the maximum extent practicable, (2) provided justification for restricting commercial carriers from transporting partial plane loads of cargo over certain routes, and (3) has established future requirements for CRAF and how the planned size of CRAF compares with those requirements. GAO reviewed guidance and policies pertaining to the program, flying hour data, and DOD-sponsored CRAF study reports. GAO also interviewed DOD and industry officials. DOD exceeded the flying hours needed to meet military training requirements for fiscal years 2002 through 2010 because of increased operational requirements associated with Afghanistan and Iraq; however, it does not know whether it used Civil Reserve Air Fleet (CRAF) participants to the maximum extent practicable. DOD guidance requires it to meet training requirements and to use commercial transportation to the "maximum extent practicable." During fiscal years 2002 through 2010, DOD flew its fleet more than needed to train its crews, although its flying has more closely matched its training needs in recent years. DOD has also used CRAF participants extensively to supplement military airlift. Although DOD has taken steps to make more airlift business available to CRAF participants, officials said that overseas operations have provided enough missions to support both training and CRAF business obligations. 
However, with the drawdown in Afghanistan, DOD officials expect the need for airlift to decline by at least 66 percent--to pre-September 2001 levels--reducing both training hours available for DOD and business opportunities for CRAF. DOD does not use its process for monitoring flying hours to determine when it will exceed required training hours and allocate eligible airlift missions to CRAF participants. Therefore, it cannot determine whether it is using CRAF to the maximum extent practicable. As a result, DOD may be using its military fleet more than necessary--which officials say is less economical--while risking reduced CRAF participation. DOD provided several reasons for restricting commercial carriers from transporting partial plane loads of cargo over channel routes, including the need to promote efficiency, meet its military airlift training requirements, and fulfill peacetime business obligations to CRAF participants. Channel route missions are regularly scheduled airlift missions used to transport cargo and provide aircrew training time. These missions also help DOD provide business to CRAF participants. According to U.S. Transportation Command (TRANSCOM) officials, DOD generally requires aircraft conducting channel route missions to be completely full of cargo before takeoff. The policy restricting carriers from flying partial loads over channel routes allows DOD to consolidate cargo previously flown by commercial carriers in less than full plane loads and redirect that cargo into the channel route system, where it will be transported by either commercial or military aircraft as part of a full plane load mission. According to DOD, consolidating cargo into full loads flown over the channel route system has increased both the efficiency of these missions and the availability of missions that DOD uses to train its crews and fulfill its business obligations to CRAF. 
It is unclear whether the planned size of CRAF will be adequate to meet future airlift requirements. DOD last established its future requirements based on the wartime scenarios in the Mobility Capabilities and Requirements Study 2016, issued in 2010. However, due to changing military strategy and priorities, the 2010 study does not reflect current mission needs. The National Defense Authorization Act for Fiscal Year 2013 requires DOD to conduct a new mobility capabilities and requirements study. DOD has not begun this study or finalized its ongoing reviews of the CRAF program's ability to support future requirements. Once they are finalized, these studies should allow DOD to better understand future requirements for CRAF and whether the CRAF program will meet future airlift requirements. GAO recommends that the Secretary of Defense direct the Secretary of the Air Force and the Commander, U.S. Transportation Command—in conjunction with the Commander, Air Mobility Command—to use their existing processes for monitoring training to determine when DOD can shift its distribution of peacetime airlift workload from military to commercial sources. In comments on a draft of this report, DOD concurred with GAO’s recommendation and stated that it believes implementing the recommendation will further improve the Civil Reserve Air Fleet program.
The CFO Act of 1990 was enacted to address longstanding problems in financial management in the federal government. The act established CFO positions throughout the federal government and mandated that, within each of the largest federal departments and agencies, the CFO oversee all financial management activities relating to the programs and operations of the agency. Among the key responsibilities of CFOs is overseeing the recruitment, selection, and training of personnel to carry out agency financial management functions. Recognizing that a qualified workforce was fundamental to achieving the objectives of the CFO Act and other related management reform legislation aimed at improving federal financial management, the Human Resources Committee of the Chief Financial Officers Council and the Joint Financial Management Improvement Program (JFMIP) have made proposals addressing the recruitment, training, retention, and performance of federal financial management personnel. In November 1995, JFMIP published the Framework for Core Competencies for Financial Management Personnel in the Federal Government, designed to highlight the knowledge, skills, and abilities that accountants, budget analysts, and other financial managers in the federal government should possess or develop to perform their functions effectively in accordance with the CFO Act. JFMIP stressed the need for federal government financial managers to be well-equipped to contribute to financial management activities, such as the execution of budgets under increasingly constrained resource caps, and the preparation, analysis, and interpretation of consolidated financial statements. 
A primary goal in this body of work is to obtain and share with DOD information on the formal education, professional work experience, training, and professional certifications of key financial managers in the department, including the Office of the Under Secretary of Defense (Comptroller), each of the military services, and the Defense Finance and Accounting Service. The objective of this assignment is to provide information on the formal education, professional work experience, training, and professional certifications of personnel serving in key financial management positions in the Army. We obtained this information from biographies and profile instruments due to the concerns of Army officials regarding the completeness of personnel databases and personnel files. We worked with Army officials to determine the key financial management positions to be included in this review. These positions typically included resource managers, deputy resource managers, and budget officers serving at Army major commands and installations. As agreed with the Army, we did not verify the information contained in the profiles provided by the respondents. A more detailed discussion of our scope and methodology, including a description of how we obtained qualifications and work experience data, is in appendix I. We performed our audit work from March through December 1997 in accordance with generally accepted government auditing standards. The Assistant Secretary of the Army (Financial Management and Comptroller) provided comments on a draft of this report. These comments are discussed in the “Agency Comments and Our Evaluation” section of this report and are reprinted in appendix IX. Table 1 provides information on the formal education, careers, and professional certifications of the Department of the Army’s four executives included in our review. All four held both bachelor’s and master’s degrees. 
Bachelor’s degree majors included mathematics, education, accounting, and engineering, while those associated with master’s degrees included public administration, business administration, and civil engineering. The Assistant Secretary had spent 30 years at DOD. The three Deputy Assistant Secretaries’ DOD careers ranged from 29 to 38 years. In addition to his 38-year career at DOD, one of the Deputy Assistant Secretaries also spent 4 years in the private sector. A review of biographical information provided to us showed that three executives had served in financial management-related positions during most of their DOD careers. These positions involved the functional areas of accounting, auditing, budgeting, programming, costing, and manpower requirements at all levels of DOD, including another military department and various Defense agencies. While the fourth executive had served mainly in engineering-related positions during his 33-year DOD career, he had also recently served as Director of Resource Management at the U.S. Army Forces Command. Two executives were Certified Government Financial Managers. In collaboration with Army officials, we identified 301 financial managers across the department for this review, of which 233 (or 77 percent) responded by providing information on their qualifications and experience. Respondents included the 14 managers from the Office of the Assistant Secretary of the Army (Financial Management and Comptroller)—ASA(FM&C); 85 of 108 managers from eight operational commands and their installations; 43 of 46 managers from the U.S. Army Training and Doctrine Command (TRADOC) and its installations; 30 of 43 managers from the U.S. 
Army Materiel Command and its (1) Industrial Operations Command and the Army arsenals and depots responsible for maintenance and manufacturing support and (2) seven systems commands responsible for the research, development, test, and evaluation (RDT&E) and procurement of Army systems, such as aviation, missiles, communications, and electronics; 32 of 59 managers from the U.S. Army Corps of Engineers and its installations; and 29 of 31 managers from other Army commands, including the Criminal Investigation Command, Military Entrance Processing Command, Medical Command, and Military District of Washington, and their installations. The 14 ASA(FM&C) respondents performed roles involving financial operations, financial management/accounting policy, and/or budget execution. The 219 respondents from major commands and installations included 132 resource managers, 26 deputy resource managers, 60 budget officers, and 1 working capital fund manager—the last being from the Industrial Operations Command. Of the 233 respondents, 27 percent were military officers. The 63 officers served mainly as resource managers at major commands and installations, and the 170 civilians served most often in resource manager and budget officer positions at installations. Table 2 provides a breakout of the officers and civilians by rank and grade, respectively. Of the 233 respondents, over 90 percent (including the 63 officers and 148 of the 170 civilians) reported holding bachelor’s degrees, and about 57 percent (53 officers and 79 civilians) reported holding master’s degrees. One of the respondents also reported holding a doctoral degree. Of the 211 respondents holding bachelor’s degrees, 17 reported more than one major. A review of the profiles showed that 69 managers, or about one-third of the 211 respondents, reported accounting majors, 85 managers reported one or more other business-related majors, and 68 managers reported that one or more of their majors were not business related. 
Table 3 shows the bachelor’s degree majors reported by the 211 Army financial managers. Of the 132 respondents holding master’s degrees, 17 reported more than one major. A review of the profiles showed that, of these 132 managers, 5 reported accounting majors, 99 reported one or more other business-related majors, and 41 reported one or more nonbusiness-related majors. Table 4 shows the master’s degree majors reported by the 132 respondents. One civilian also reported holding a doctoral degree in public administration. The key financial managers were also requested to provide information on the number of accounting-related subjects completed as part of their formal education. Of the 233 respondents, 207 reported completing one or more of these subjects, as follows: 1-2 subjects: 32 (6 officers and 26 civilians), 3-5 subjects: 51 (19 officers and 32 civilians), and 6 or more subjects: 124 (28 officers and 96 civilians). Included in this latter group were 119 managers (or 51 percent of the respondents) who reported completing both principles of accounting and intermediate accounting along with at least four other subjects. Based solely on a review of their formal education, these 119 managers appear to have met the requirements to serve in federal GS-510 accountant positions. A review of the profiles showed that the 63 officers’ careers ranged from 10 to 31 years, averaging 23 years, while the 170 civilians’ careers ranged from 15 to 42 years, averaging 27 years. Both officer and civilian respondents, with few exceptions, had spent most of their careers in DOD. Also, about 42 percent of all respondents, officers and civilians, reported performing several financial management-related functions during their careers. Figures 1 and 2 show the average number of years of work experience by rank for the officers and by grade for the civilians, respectively. 
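The GS-510 screening described above (both principles of accounting and intermediate accounting completed, plus at least four other accounting-related subjects) can be expressed as a simple check. This sketch is illustrative only; the subject names and record layout are assumptions, not the actual profile-instrument data:

```python
# Illustrative screen for the GS-510 education criterion described above:
# both principles of accounting and intermediate accounting completed,
# plus at least four other accounting-related subjects.
# Subject names and the sample record are hypothetical.

def meets_gs510_education(subjects):
    """Return True if a list of completed accounting-related subjects
    satisfies the screening criterion."""
    required = {"principles of accounting", "intermediate accounting"}
    completed = {s.lower() for s in subjects}
    others = completed - required
    return required <= completed and len(others) >= 4

sample = ["Principles of Accounting", "Intermediate Accounting",
          "Auditing", "Cost Accounting", "Federal Accounting",
          "Accounting Systems"]
print(meets_gs510_education(sample))  # True: 2 required + 4 others
```

Applied across all 233 profiles, a filter like this would reproduce the count of 119 managers meeting the criterion.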
In collaboration with Army officials, we identified four functions and associated tasks that are often performed by personnel serving in key financial management positions:

financial statement preparation/financial reporting/accounting policy—preparing annual financial statements and footnotes and any interim financial reports, as well as advising the preparers in their application of accounting policies;

financial analysis—performing tasks associated with cost accounting, business process improvements, budgeting, cash flow analysis, cost analysis, revenue and expenditure forecasting, and other analysis of financial position and operations;

accounting operations—recording and reporting accounting transactions; and

accounting systems development and maintenance—performing tasks associated with functional design and maintenance of accounting and finance systems.

Seventeen officers and 49 civilians (or about 30 percent of each group) reported that they had performed three or more of these functions during their careers. Figures 3 and 4 show, for the officers and civilians responding to this survey, which of these functions they had performed at some time during their careers, and the average number of years of experience in each function. For example, as shown in figure 3, 50 of the 63 officers had performed financial analysis-related tasks for an average of 7 years. During 1995 and 1996, about 56 percent of the officers and 75 percent of the civilians reported completing some training in one or more of the categories included in our review. Of these 163 respondents (35 officers and 128 civilians) receiving training, (1) about 90 percent listed general topics, such as computers and supervision, as examples of the training they had completed, (2) about 50 percent reported completing training in financial-related topics, and (3) about 25 percent reported completing training in accounting-related topics, such as accounting standards and financial reporting. 
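The training-coverage percentages above follow directly from the reported counts; as a quick arithmetic check:

```python
# Share of officers and civilians reporting any training during
# 1995-1996, from the counts reported above (35 of 63 officers,
# 128 of 170 civilians).
officers_total, civilians_total = 63, 170
officers_trained, civilians_trained = 35, 128

print(f"officers: {officers_trained / officers_total:.0%}")     # 56%
print(f"civilians: {civilians_trained / civilians_total:.0%}")  # 75%
```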
Also, a review of the profiles showed that 76 managers completed only general training and 70 other managers had not completed any training. Therefore, almost 63 percent of the 233 respondents had not received any accounting or financial training during those 2 years. Figure 5 shows the training reported as being completed by the 233 respondents during the 2-year period. A review of the profiles showed:

total receiving accounting-related training: 40 (10 officers and 30 civilians),
total receiving financial-related training: 75 (21 officers and 54 civilians),
total receiving training in general topics: 143 (26 officers and 117 civilians), and
total not receiving training: 70 (28 officers and 42 civilians).

Almost 20 percent of the 233 respondents reported holding one or more professional certifications. A review of the profiles showed that, of these 46 managers, 11 civilians were CPAs; 37 were CGFMs (6 officers and 31 civilians); 2 civilians held other financial management-related certifications, including the Certified Cost Estimator/Analyst and Certified Internal Auditor; and 3 civilians reported nonfinancial management-related certifications. Of the 187 managers who did not hold any professional certifications, 57 were officers and 130 were civilians. Figure 6 shows the types of certifications reported by the 233 Army financial managers. Appendixes II through VIII provide the formal education, professional work experience, training, and professional certification data for the 63 officers and 170 civilians by their respective organizations, including: ASA(FM&C) in appendix II; 8 operational commands and 50 of their 57 installations in appendix III; the U.S. Army Training and Doctrine Command and its 19 installations in appendix IV; the U.S. Army Materiel Command (AMC) and its Industrial Operations Command, and 8 of the 14 arsenals and depots in appendix V; AMC and 6 of the 7 systems commands in appendix VI; the U.S. 
Army Corps of Engineers and 29 of its 55 installations in appendix VII; and 4 other Army commands and 16 of their 18 installations in appendix VIII. In commenting on a draft of this report, the Army generally concurred with the contents and stated that it believed the information will be beneficial in its Army-wide Financial Management Redesign implementation. The Army’s comments are reprinted in appendix IX. Also, the Army provided a number of technical comments, which we fully addressed in finalizing our report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs; the House Committee on Government Reform and Oversight; and the Subcommittee on Government Management, Information, and Technology of the House Committee on Government Reform and Oversight; the Secretary of Defense; and the Director of the Office of Management and Budget. Copies will also be made available to others upon request. If you have any questions about this report, please contact me at (202) 512-9095. Major contributors to this report are listed in appendix X. In collaboration with Army officials, we identified Army senior executives and financial managers to be included in this review as those serving in key positions throughout the department. The four senior executives in the Office of the Assistant Secretary of the Army (Financial Management and Comptroller)—ASA(FM&C)—included the Assistant Secretary of the Army (Financial Management and Comptroller), the Principal Deputy Assistant Secretary of the Army (Financial Management and Comptroller), the Deputy Assistant Secretary of the Army for Financial Operations, and the Deputy Assistant Secretary of the Army for Budget. 
The 301 key financial management positions selected for this review included: 14 from ASA(FM&C) involved in financial operations, financial management/accounting policy, and/or budget execution-related functions and 287 (including resource managers, deputy resource managers, budget officers, and working capital fund managers) from 186 major commands and installations involved in (1) operations, (2) training, (3) maintenance and manufacturing, (4) research, development, test, evaluation, and procurement of Army systems, such as aviation, missiles, communications, and electronics, (5) engineering services for DOD and other entities, and (6) criminal investigation, processing of new enlisted personnel, medical services, and support functions for the Washington, D.C., area military organizations. In addition to the 4 senior executives, 233 financial managers located at 145 of the 187 organizations responded to this review. The 233 respondents included the 14 ASA(FM&C) managers and 219 managers from major commands and installations, comprising 132 resource managers, 26 deputy resource managers, 60 budget officers, and 1 working capital fund manager. Table I.1 identifies the Army major commands, the number of their installations, and information on the key financial managers included in this review.

U.S. Army Forces Command and 12 of its 16 installations (27 of the 35 managers responding included 13 resource managers, 3 deputy resource managers, and 11 budget officers)

U.S. Army Europe and its eight installations (11 of the 11 managers responding included 9 resource managers and 2 deputy resource managers)

Eighth U.S. Army and 9 of its 10 installations (15 of the 16 managers responding included 9 resource managers, 1 deputy resource manager, and 5 budget officers)

U.S. Army Pacific and its five installations (8 of the 13 managers responding included 6 resource managers and 2 deputy resource managers)

U.S. Army South (one of the three managers responding included one resource manager)

U.S. Army Military Traffic Management Command and two of its four installations (3 of the 10 managers responding included 1 resource manager, 1 deputy resource manager, and 1 budget officer)

U.S. Army Space and Strategic Defense Command and its two installations (six of the six managers responding included three resource managers, one deputy resource manager, and two budget officers)

U.S. Army Intelligence and Security Command and its 12 installations (14 of the 14 managers responding included 10 resource managers, 1 deputy resource manager, and 3 budget officers)

U.S. Army Training and Doctrine Command and its 19 installations (43 of the 46 managers responding included 18 resource managers, 7 deputy resource managers, and 18 budget officers)

U.S. Army Materiel Command and its Industrial Operations Command and 8 of 14 arsenals and depots (16 of the 26 managers responding included 9 resource managers, 1 deputy resource manager, 5 budget officers, and 1 working capital fund manager)

U.S. Army Materiel Command and six of its seven systems commands shown below (16 of the 20 managers responding included 5 resource managers, 5 deputy resource managers, and 6 budget officers)
— U.S. Army Test and Evaluation Command
— U.S. Army Simulation Training and Instrumentation Command
— U.S. Army Chemical and Biological Defense Command
— U.S. Army Aviation and Troop Command
— U.S. Army Missile Command
— U.S. Army Soldier Systems Command
— U.S. Army Communications and Electronics Command

U.S. Army Corps of Engineers and 29 of its 55 installations (32 of the 59 managers responding included 30 resource managers, 1 deputy resource manager, and 1 budget officer)

U.S. Army Criminal Investigation Command and its three installations (five of the five managers responding included two resource managers and three budget officers)

Military Entrance Processing Command (three of the three managers responding included one resource manager, one deputy resource manager, and one budget officer)

U.S. Army Medical Command and 9 of its 10 installations (11 of the 12 managers responding included 10 resource managers and 1 budget officer)

U.S. Army Military District of Washington and four of its five installations (10 of the 11 managers responding included 5 resource managers, 1 deputy resource manager, and 4 budget officers)

We obtained fiscal year 1997 Army budget data, including operation and maintenance (O&M); research, development, test, and evaluation (RDT&E); and procurement funding from the ASA(FM&C) budget office. Those major commands and installations identified for this review managed $24 billion of the $64 billion Army budget during fiscal year 1997. In an August 1988 report, GAO proposed a framework for evaluating the quality of the federal workforce over time. Quantifiable measures identified in that report include specific knowledge, skills, and abilities. Using this report and the JFMIP study on core competencies, and in collaboration with DOD representatives, we identified four indicators to measure the attributes that key financial managers can bring to their positions. These indicators included formal education, professional work experience, training, and professional certifications. These attributes are being used to measure the qualifications and experience of key financial managers in the five DOD organizations included in our reviews. We then worked with Army officials in developing a data collection instrument to gather the following types of information under each indicator:

Formal education: degrees attained, academic majors, and specific accounting and financial-related courses completed. 
Professional work experience: (1) number of years working in current position, years at DOD, years in other government agencies, and years in the private sector, and (2) experience in four specific financial management-related functions.

Training (referred to as continuing professional education in the profile instrument): for the period of 1995-1996, specific subjects completed related to accounting, other financial-related topics, and general topics.

Professional certifications: CPA, CGFM, other financial certifications, and other nonfinancial management certifications held.

For the four Army executives, we obtained information on their formal education, careers, and professional certifications from biographies and profile instruments provided by these officials. For all other individuals, due to Army officials’ concerns over the completeness of personnel files and databases, we also agreed to collect information on the four indicators using profile instruments. This procedure is being used for collecting qualification and experience information from all DOD organizations included in this series of assignments. Since the Army chose to maintain the anonymity of its respondents, our Army liaisons sent profile instruments to the four Army executives and other key financial managers in the Office of the Assistant Secretary of the Army (Financial Management and Comptroller). The liaisons also sent profile instruments to points of contact at each major command, who, in turn, distributed the profile instruments to those key financial managers identified for this review at their respective commands and installations. The liaisons conducted additional follow-up efforts to contact those financial managers who did not initially respond as well as those respondents whose profile instruments were returned with incomplete information. 
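The profile instrument gathered one record per manager across the four indicators, and the liaisons followed up on any instrument returned incomplete. A minimal sketch of that completeness check, with hypothetical field names (the actual instrument is reproduced in figure I.1), might look like:

```python
# Hypothetical layout for one profile-instrument response; the real
# instrument appears in figure I.1 of the report. A record missing any
# of the four indicator sections would trigger liaison follow-up.
REQUIRED_SECTIONS = ("formal_education", "work_experience",
                     "training_1995_96", "certifications")

def is_complete(profile):
    """Return True only if every indicator section was answered."""
    return all(profile.get(section) is not None
               for section in REQUIRED_SECTIONS)

response = {
    "formal_education": {"bachelors": "accounting"},
    "work_experience": {"dod_years": 24, "private_years": 2},
    "training_1995_96": ["accounting standards", "supervision"],
    "certifications": ["CGFM"],
}
print(is_complete(response))  # True
```

A response missing, say, its training section would be flagged and returned through the command point of contact for completion.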
Through these efforts, we received complete profile information from the four Army executives and 77 percent of the key financial managers identified for this review. Figure I.1 contains the profile instrument we used to obtain personnel qualification and experience information from the key financial managers. As agreed with the Army, we did not attempt to verify the information contained in the profiles we received. However, as noted above, for incomplete profile instruments, the Army liaisons conducted follow-up efforts and obtained the missing information. We conducted our work from March through December 1997 in accordance with generally accepted government auditing standards. We included 14 key financial managers from the Office of the Assistant Secretary of the Army (Financial Management and Comptroller) (ASA(FM&C)), all of whom provided information on their qualifications and experience. This population included four managers involved in financial operations, one in financial management/accounting policy, and nine in budget execution functions. Table II.1 shows the officer and civilian composition of these managers, by rank and grade, respectively. As shown in table II.2, all 14 respondents held bachelor’s degrees. Two of the 14 managers majored in accounting, while 4 managers reported other business-related majors. As shown in table II.3, 13 respondents also held master’s degrees, with 1 reporting more than one major. One manager held a master’s degree in accounting and seven managers listed other business-related majors. Twelve respondents reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 4 civilians, 3-5 subjects: 3 (2 officers and 1 civilian), and 6 or more subjects: 5 (1 officer and 4 civilians). 
Based solely on a review of their formal education, all five respondents in the latter group appear to have met the requirements to serve in GS-510 accountant positions. A review of the profiles showed that the average number of years of professional work experience was 27 years for the 4 officers, with a range of 25 to 30 years, and 25 years for the 10 civilians, with a range of 18 to 35 years. With one exception, the respondents had spent most of their careers in DOD. Figures II.1 and II.2 show the work experience by rank for the officers and by grade for the civilians, respectively. Figures II.3 and II.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined at some time during their careers, and the average number of years of experience in each function. Financial analysis was the function performed most frequently and, as noted in figure II.3, was the only function performed by officers. A review of the 10 civilians’ profiles also showed that 4 had performed three or more of these functions. Figure II.5 shows the training reported by the 14 respondents as being completed during 1995 and 1996. A review of the profiles showed total receiving accounting-related training: 4 (1 officer and 3 civilians), total receiving financial-related training: 4 civilians, total receiving training in general topics: 8 (1 officer and 7 civilians), and total not receiving training: 5 (3 officers and 2 civilians).
A review of the profiles showed that, of the four civilians reporting one or more professional certifications, one was a CPA, one held another financial management certification, and one held a nonfinancial management certification. Of the 10 managers that did not hold any professional certifications, 4 were officers and 6 were civilians. Figure II.6 shows the types of professional certifications reported by the ASA(FM&C) financial managers. The eight Army operational commands included in this review, shown below, managed O&M budgets totaling $8.98 billion during fiscal year 1997: U.S. Army Forces Command, U.S. Army Europe, Eighth U.S. Army, U.S. Army Pacific, U.S. Army South, U.S. Army Military Traffic Management Command, U.S. Army Space and Strategic Defense Command (renamed U.S. Army Space and Missile Defense Command during this review), and U.S. Army Intelligence and Security Command. Surveys were distributed to 108 financial managers; 85 responded, representing all eight operational commands and 50 of their 57 installations. Table III.1 shows the number of installations responding by major command, the number of key financial managers surveyed within each command, and the number responding to this review. The table also shows the O&M funding budgeted for fiscal year 1997 for each major command. The installations responding, by command, were as follows: U.S. Army Forces Command (12), U.S. Army Europe (8), Eighth U.S. Army (9), U.S. Army Pacific (5), U.S. Army Military Traffic Management Command (2), U.S. Army Space and Strategic Defense Command (2), and U.S. Army Intelligence and Security Command (12), for a total of 50. The 85 respondents included 52 resource managers, 11 deputy resource managers, and 22 budget officers. Table III.2 shows the officer and civilian composition of the respondents, by rank and grade, respectively.
As shown in table III.3, 73 of the 85 respondents held bachelor’s degrees, with 8 reporting more than one major. The major for 21 of these managers was accounting, while 33 managers reported 34 other business-related majors. As shown in table III.4, 41 respondents also held master’s degrees, with 3 reporting more than one major. One manager held a master’s degree in accounting, while 32 managers reported 33 other business-related majors. Seventy-three of the 85 respondents reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 13 (3 officers and 10 civilians), 3-5 subjects: 16 (7 officers and 9 civilians), and 6 or more subjects: 44 (13 officers and 31 civilians). Based solely on a review of their formal education, the 13 officers and 30 of the 31 civilians in the latter group appear to have met the requirements to serve in GS-510 accountant positions. A review of the profiles showed that the average number of years of professional work experience was 21 years for the 27 officers, with a range of 10 to 31 years, and 28 years for the 58 civilians, with a range of 18 to 42 years. With few exceptions, the respondents had spent most of their careers in DOD. Figures III.1 and III.2 show the work experience by rank for the officers and by grade for the civilians, respectively. Figures III.3 and III.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined at some time during their careers, and the average number of years of experience in each function. Financial analysis was the function performed most frequently.
A review of the profiles also showed that 8 officers and 18 civilians had performed three or more of these functions. Figure III.5 shows the training reported by the 85 respondents as being completed during 1995 and 1996. A review of the profiles showed total receiving accounting-related training: 13 (3 officers and 10 civilians), total receiving financial-related training: 26 (9 officers and 17 civilians), total receiving training in general topics: 46 (10 officers and 36 civilians), and total not receiving training: 29 (12 officers and 17 civilians). A review of the profiles showed that, of the 10 managers reporting one or more professional certifications, 2 civilians were CPAs, 8 were CGFMs (1 officer and 7 civilians), and 1 civilian held a nonfinancial management certification. Of the 75 managers that did not hold any professional certifications, 26 were officers and 49 were civilians. Figure III.6 shows the types of professional certifications reported by the operational command and installation financial managers. The U.S. Army Training and Doctrine Command (TRADOC) managed an O&M budget of $2.3 billion for fiscal year 1997. Forty-three of the 46 key financial managers from TRADOC (representing its 19 installations) provided information on their qualifications and experience, including 18 resource managers, 7 deputy resource managers, and 18 budget officers. Table IV.1 shows the officer and civilian composition of the respondents by rank and grade, respectively. As shown in table IV.2, 36 respondents held bachelor’s degrees, with 1 reporting more than one major.
The major for 13 of these managers was accounting, while 17 managers reported other business-related majors. As shown in table IV.3, 28 respondents also held master’s degrees, with 2 reporting more than one major. The major for 2 of these managers was accounting, while 20 managers reported other business-related majors. Thirty-eight of the 43 respondents reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 6 (1 officer and 5 civilians), 3-5 subjects: 10 (4 officers and 6 civilians), and 6 or more subjects: 22 (6 officers and 16 civilians). Based solely on a review of their formal education, 5 of the 6 officers and the 16 civilians in the latter group appear to have met the requirements to serve in GS-510 accountant positions. A review of the profiles showed that the average number of years of professional work experience was 23 years for the 13 officers, with a range of 16 to 31 years, and 26 years for the 30 civilians, with a range of 15 to 38 years. With one exception, the respondents had spent most of their careers in DOD. Figures IV.1 and IV.2 show the average number of years of work experience by rank for the officers and by grade for the civilians, respectively. As shown in figure IV.2, the 30 civilians included 3 at grade GS-15, 8 at GS-14, 14 at GS-13, and 5 at GS-12. Figures IV.3 and IV.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined at some time during their careers, and the average number of years of experience in each function. The financial management function performed most frequently was financial analysis. A review of the profiles also showed that two officers and six civilians had performed three or more of these functions.
Figure IV.5 shows the training reported by the 43 respondents as being completed during 1995 and 1996. A review of the profiles showed total receiving accounting-related training: 4 (1 officer and 3 civilians), total receiving financial-related training: 14 (4 officers and 10 civilians), total receiving training in general topics: 26 (5 officers and 21 civilians), and total not receiving training: 11 (5 officers and 6 civilians). A review of the profiles showed that, of the seven managers reporting one or more professional certifications, three civilians were CPAs, four were CGFMs (one officer and three civilians), and one civilian held another financial management certification. Of the 36 managers that did not hold professional certifications, 12 were officers and 24 were civilians. Figure IV.6 shows the types of professional certifications reported by the Training and Doctrine Command and installation financial managers. The 14 arsenals and depots within the U.S. Army Materiel Command’s (AMC) Industrial Operations Command (IOC) managed a fiscal year 1997 budget of $7.4 billion, derived in part from their customers’ O&M accounts. Sixteen of 26 key financial managers at AMC, IOC, and the arsenals and depots provided information on their qualifications and experience. The 16 respondents included 9 resource managers, 1 deputy resource manager, 5 budget officers, and 1 working capital fund manager. Table V.1 provides the rank of the officer and grades of the 15 civilians.
As shown in table V.2, all of the 16 respondents held bachelor’s degrees, with 3 reporting more than one major. Eight managers majored in accounting, while two managers reported other business-related majors. As shown in table V.3, eight respondents also held master’s degrees, with five reporting more than one major. All eight managers reported other business-related majors. Fifteen of the 16 respondents reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 1 civilian, 3-5 subjects: 4 (1 officer and 3 civilians), and 6 or more subjects: 10 civilians. Based solely on a review of their formal education, the respondents in the latter group appear to have met the requirements to serve in GS-510 accountant positions. A review of the profiles showed that the officer had 26 years of professional work experience, while the 15 civilians’ experience averaged 26 years, with a range of 17 to 37 years. With one exception, the respondents had spent most of their careers in DOD. Figures V.1 and V.2 show the average number of years of work experience by rank for the officer and by grade for the civilians, respectively. The officer was a colonel; as shown in figure V.2, the 15 civilians included 3 at grade GS-15, 5 at GS-14, and 7 at GS-13. Figures V.3 and V.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined at some time during their careers, and the average number of years of experience in each function. The financial management function performed most frequently was financial analysis. A review of the profiles also showed that the officer and three civilians had performed three or more of these functions.
Figure V.5 shows the training reported by the 16 respondents as being completed during 1995 and 1996. A review of the profiles showed total receiving accounting-related training: 2 civilians, total receiving financial-related training: 3 civilians, total receiving training in general topics: 11 civilians, and total not receiving training: 5 (1 officer and 4 civilians). None of the 16 respondents held professional certifications. In addition to the arsenals and depots, the U.S. Army Materiel Command (AMC) also has oversight of systems commands. The seven systems commands, shown below, managed O&M, RDT&E, and procurement budgets totaling $3.88 billion during fiscal year 1997: U.S. Army Test and Evaluation Command, U.S. Army Simulation Training and Instrumentation Command, U.S. Army Chemical and Biological Defense Command, U.S. Army Aviation and Troop Command, U.S. Army Missile Command, U.S. Army Soldier Systems Command, and U.S. Army Communications and Electronics Command. The 16 key financial managers at AMC and these commands provided information on their qualifications and experience, including five resource managers, five deputy resource managers, and six budget officers. Table VI.1 shows the officer and civilian composition of the respondents by rank and grade, respectively. As shown in table VI.2, all 16 respondents held bachelor’s degrees, with 1 reporting more than one major. Four of these managers majored in accounting, while five managers reported six other business-related majors.
As shown in table VI.3, 11 respondents also held master’s degrees, with 2 reporting more than one major. The majors for six of these managers were business related. Fourteen of the 16 respondents reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 2 civilians, 3-5 subjects: 7 (1 officer and 6 civilians), and 6 or more subjects: 5 (1 officer and 4 civilians). Based solely on a review of their formal education, the respondents in the latter group appear to have met the requirements to serve in GS-510 accountant positions. A review of the profiles showed that the average years of professional work experience was 25 years for the 4 officers, with a range of 24 to 26 years, and 26 years for the 12 civilians, with a range of 16 to 38 years. The respondents had spent most of their careers in DOD. Figures VI.1 and VI.2 show the average number of years of work experience by rank for the officers and by grade for the civilians, respectively. As shown in these figures, all 4 officers were colonels and all 12 civilians were at grade GS-15. Figures VI.3 and VI.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined at some time during their careers, and the average number of years of experience in each function. The financial management function performed most frequently was financial analysis. A review of the profiles also showed that two officers and three civilians had performed three or more of these functions. Figure VI.5 shows the training reported by the 16 respondents as being completed during 1995 and 1996.
A review of the profiles showed total receiving accounting-related training: 3 (1 officer and 2 civilians), total receiving financial-related training: 7 (1 officer and 6 civilians), total receiving training in general topics: 10 (1 officer and 9 civilians), and total not receiving training: 6 (3 officers and 3 civilians). A review of the profiles showed that the two managers reporting professional certifications were CGFMs—one officer and one civilian. The 14 managers that did not hold any professional certifications included 3 officers and 11 civilians. Figure VI.6 shows the types of professional certifications reported by the systems command financial managers. The U.S. Army Corps of Engineers managed O&M, RDT&E, and procurement budgets totaling $715 million during fiscal year 1997. Thirty-two of the 59 key financial managers (representing headquarters and 29 of its 55 offices) provided information on their qualifications and experience, including 30 resource managers, 1 deputy resource manager, and 1 budget officer. Table VII.1 shows the officer and civilian composition of the respondents by rank and grade, respectively. As shown in table VII.2, all 32 respondents held bachelor’s degrees, with 2 reporting more than one major. Nineteen managers majored in accounting, while 9 managers reported other business-related majors. As shown in table VII.3, 14 respondents also held master’s degrees, with 2 reporting more than one major. Of the 14 managers, 1 majored in accounting and 11 reported other business-related majors. Thirty of the 32 respondents reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 1 civilian, 3-5 subjects: 3 civilians, and 6 or more subjects: 26 (1 officer and 25 civilians).
Based solely on a review of their formal education, the officer and 22 of the 25 civilians in the latter group appear to have met the requirements to serve in GS-510 accountant positions. A review of the profiles showed that the officer had 26 years of professional work experience, while the average was 27 years for the 31 civilians, with a range of 16 to 40 years. The respondents had spent most of their careers in DOD. Figures VII.1 and VII.2 show the work experience by rank for the officer and by grade for the civilians, respectively. As shown in figure VII.1, the officer was a colonel. Figures VII.3 and VII.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined at some time during their careers, and the average number of years of experience in each function. The financial management function performed most frequently was financial analysis. A review of the profiles also showed that 11 civilians had performed three or more of these functions. Figure VII.5 shows the training reported by the 32 respondents as being completed during 1995 and 1996. A review of the profiles showed total receiving accounting-related training: 11 civilians, total receiving financial-related training: 11 civilians, total receiving training in general topics: 23 civilians, and total not receiving training: 7 (1 officer and 6 civilians). A review of the profiles showed that, of the 17 civilians reporting one or more professional certifications, 5 were CPAs and 14 were CGFMs.
Of the 15 managers that did not hold any professional certifications, 1 was an officer and 14 were civilians. Figure VII.6 shows the types of professional certifications reported by the Corps of Engineers financial managers. The other Army organizations included in this review, shown below, managed O&M, RDT&E, and procurement budgets totaling $945 million during fiscal year 1997: U.S. Army Criminal Investigation Command, Military Entrance Processing Command, U.S. Army Medical Command, and U.S. Army Military District of Washington. The 29 key financial managers at these commands and their installations provided information on their qualifications and experience, including 18 resource managers, 2 deputy resource managers, and 9 budget officers. Table VIII.1 provides the ranks of the 14 officers and grades of the 15 civilians. As shown in table VIII.2, 26 respondents held bachelor’s degrees, with 2 reporting more than one major. Three of these managers majored in accounting, while 15 managers reported other business-related majors. As shown in table VIII.3, 19 respondents also held master’s degrees, with 4 reporting more than one major. One or more of the majors reported by 17 of these 19 managers were business related. Twenty-seven of the 29 respondents reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 5 (2 officers and 3 civilians), 3-5 subjects: 9 (5 officers and 4 civilians), and 6 or more subjects: 13 (6 officers and 7 civilians). Based solely on a review of their formal education, the respondents in the latter group appear to have met the requirements to serve in GS-510 accountant positions.
A review of the profiles showed that the average number of years of professional work experience was 23 years for the 14 officers, with a range of 19 to 27 years, and 26 years for the 15 civilians, with a range of 19 to 40 years. The respondents had spent most of their careers in DOD. Figures VIII.1 and VIII.2 show the work experience by rank for the officers and by grade for the civilians, respectively. As shown in figure VIII.2, the 15 civilians included 6 at grade GS-14, 5 at GS-13, and 4 at GS-12. Figures VIII.3 and VIII.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined at some time during their careers, and the average number of years of experience in each function. The financial management function performed most frequently was financial analysis. A review of the profiles also showed that five officers and four civilians had performed three or more of these functions. Figure VIII.5 shows the training reported by the 29 respondents as being completed during 1995 and 1996. A review of the profiles showed total receiving accounting-related training: 5 (4 officers and 1 civilian), total receiving financial-related training: 12 (7 officers and 5 civilians), total receiving training in general topics: 20 (9 officers and 11 civilians), and total not receiving training: 8 (4 officers and 4 civilians). A review of the profiles showed that, of the six managers reporting professional certifications, five were CGFMs (three officers and two civilians) and one civilian held a nonfinancial management-related certification.
Of the 23 managers that did not hold any professional certifications, 11 were officers and 12 were civilians. Figure VIII.6 shows the types of professional certifications reported by the other Army organizations’ financial managers. George H. Stalcup, Associate Director; Geoffrey B. Frank, Assistant Director; Robert L. Self, Project Manager; Jan E. Bogus, Auditor-in-Charge; Linda J. Brigham, Senior Auditor; Patricia A. Summers, Senior Auditor; Dennis B. Fauber, Senior Evaluator; Francine M. DelVecchio, Communications Analyst; Michelle A. Howard, Intern.
Pursuant to a legislative requirement, GAO provided information on the qualifications, including formal education, professional work experience, training, and professional certifications of personnel serving in key financial management positions in the Army. GAO noted that: (1) the four Army financial management executives included in its review are the Assistant Secretary of the Army (Financial Management and Comptroller), the Principal Deputy Assistant Secretary of the Army (Financial Management and Comptroller), the Deputy Assistant Secretary of the Army for Financial Operations, and the Deputy Assistant Secretary of the Army for Budget; (2) each of the executives had attained master's degrees; (3) the Assistant Secretary had spent 30 years at the Department of Defense (DOD); (4) the Deputy Assistant Secretaries had DOD careers ranging from 29 to 38 years, with one of the three also spending part of his career in the private sector; (5) two of the executives held certifications in government financial management; and (6) of the 233 other key Army financial managers responding to GAO's review: (a) about 27 percent (63) were military officers, serving mainly as resource managers and budget officers at major commands and installations; and 73 percent (170) were civilian personnel serving mainly in resource manager and budget officer positions at installations; (b) all 63 officers and 148 of the 170 civilians reported holding bachelor's degrees, with 17 of these respondents reporting more than one major; (c) about one-third of these 211 managers majored in accounting, while approximately 40 percent reported degrees in business-related majors other than accounting; (d) 132 respondents (53 officers and 79 civilians) also reported holding advanced degrees, with 17 of these respondents reporting more than one major; (e) five of the 132 managers held master's degrees in accounting, while about 75 percent reported degrees in business-related majors other than accounting; 
(f) the officers' careers ranged from 10 to 31 years, averaging 23 years, while civilians' careers ranged from 15 to 42 years, averaging 27 years; (g) 163 respondents reported completing training in one or more of the categories included in GAO's review during 1995 and 1996; (h) about 20 percent of the 233 respondents reported holding one or more professional certifications; and (i) of the 46 managers in this group, 44 reported holding accounting and other financial management-related certifications, as follows: 11 were Certified Public Accountants, 37 were Certified Government Financial Managers, and 2 held other certifications, including the Certified Cost Estimator/Analyst and Certified Internal Auditor.
In preparation for the initiation of Operation Iraqi Freedom (OIF), the United States deployed four Army divisions and their supporting reserve component units, a Marine Expeditionary Force, and a significant portion of the Navy’s and Air Force’s combat power to southwest Asia. The Army’s Third Infantry Division (mechanized) and its supporting reserve component units were primarily equipped with Army-prepositioned assets—consisting of over 17,000 pieces of rolling stock and almost 6,000 standard 20-foot shipping containers of supplies—drawn from prepositioned equipment sites located in southwest Asia and offloaded from Army prepositioned ships. The 1st Marine Expeditionary Force was also primarily equipped with prepositioned assets offloaded from Marine Corps prepositioned ships. By the height of the combat portion of OIF, the United States had deployed a significant portion of its combat power to southwest Asia. For example, during fiscal year 2003, the Army deployed four divisions and numerous supporting active, reserve, and national guard units which participated in the combat phase of OIF. As a result, the Army estimated that these units, with assistance from other Army maintenance activities, would have to reconstitute over 53,000 pieces of rolling stock when the units redeployed to their home stations. In Iraq and Afghanistan, the U.S. military—especially the Army and Marine Corps—is operating at a pace well in excess of its normal peacetime level, which is driven by units’ training requirements. This not only greatly increases the day-to-day operational maintenance requirements, including spare parts demands, of deployed Army and Marine Corps units, but also generates a large post-operational maintenance requirement that must be addressed when the units redeploy to their home station. 
Upon redeployment, the units need to bring their equipment at least back up to fully mission-capable status in order for the units to be able to train on their equipment and achieve their readiness levels and be prepared for future deployments. In addition, before leaving Iraq, redeploying units turned in a large amount of prepositioned equipment that must undergo maintenance and repair before the equipment can be reissued to units deploying to or already in southwest Asia in support of OIF, or returned to prepositioned equipment stock for future use. Figure 1 shows some prepositioned equipment stored in Kuwait awaiting repair. Army and Marine Corps units returning from deployments related to the global war on terrorism have equipment that has been heavily used and is in various degrees of disrepair. Upon returning to home station, the units’ equipment is inspected to determine what maintenance is needed to bring the equipment back to the condition needed to allow the unit to conduct mission-essential training and be prepared for future deployments. The services use myriad repair and maintenance sources to assist units in reconstituting their equipment. At military installations, various maintenance personnel—including military personnel within units, installation personnel (including contractors) who support day-to-day maintenance operations, contractors who have been hired to augment the units’ and installations’ day-to-day workforces, and contractors who have been hired to increase an installation’s maintenance capacity—are all working in concert to reconstitute the units’ equipment in a timely manner so that the units will be ready to again deploy. In addition, military depots and contractors are using their vast maintenance and repair capabilities to help in the equipment reconstitution effort. 
To fund the global war on terrorism in fiscal year 2003, Congress provided DOD with $62 billion in a fiscal year 2003 supplemental appropriation, primarily to fund operations in Iraq. While most of this funding was used to cover the costs of combat operations, the DFAS monthly terrorism cost report indicates that about $3.8 billion was obligated for equipment reconstitution in fiscal year 2003. The fiscal year 2004 global war-on-terrorism supplemental budget included a significantly larger amount for equipment reconstitution than the previous supplemental budget. DOD requested $65.6 billion in its fiscal year 2004 global war-on-terrorism supplemental budget request, which Congress funded at $64.3 billion. In the budget-building process used to develop this request, the department included $5.9 billion for equipment reconstitution. Table 1 shows the requirements the services developed for the fiscal year 2004 supplemental budget process and what OSD ultimately included in its supplemental budget submission to Congress. The requirements are broken down between unit-level and depot-level requirements. Unit-level maintenance, which consists of organizational- and intermediate-level maintenance, includes maintenance performed by military units in motor pools and maintenance support units, and by DOD civilians and contractor personnel at installation maintenance organizations. Depot-level maintenance includes maintenance performed by DOD civilian employees and DOD contractors at military depots or private facilities. The fiscal year 2004 supplemental defense appropriation does not delineate the amounts appropriated for unit- and depot-level equipment reconstitution.
The two-phased process DOD used to develop its fiscal year 2004 supplemental budget equipment reconstitution requirements contained weaknesses that produced errors and that, if not corrected, may result in misstatements of future-year budget estimates. We observed two problems with the COST model used in the first phase of DOD’s process that have generated unreliable estimates. First, the COST model can overstate aircraft and ship reconstitution costs because these costs are covered in both the operations and reconstitution sections of the model. Second, there was uncertainty within DOD over the maintenance requirements covered by the model. We also noted problems with the second phase of DOD’s process. In one instance, the Army did not consider funding in its baseline peacetime operation and maintenance budget that would be available for equipment reconstitution. The Army also significantly overestimated the organizational- and intermediate-level maintenance costs to reconstitute individual equipment items. In another instance, the services included requirements in their reconstitution estimates that appear to be inconsistent with the equipment reconstitution activities established by OSD’s supplemental budget preparation guidance. Also, in preparing their fiscal year 2004 supplemental budget submissions, OSD guidance allowed the services to request funding to replace only known battle losses, excluding projected battle losses and other expected losses. The model that OSD and the services used in the first phase of the process to calculate reconstitution requirements for fiscal year 2004 resulted in an overstatement of about $1.2 billion. This is because the COST model contains an error that can duplicate reconstitution cost requirements: the equipment reconstitution section of the model provides funding for aircraft and ship reconstitution that is already funded through the operations section of the model.
All services support their aircraft through a flying-hour program that covers costs associated with operating aircraft, such as petroleum, oil, and lubricants; consumables (supplies); and spare parts. As a result, all organizational- and intermediate-level maintenance and repair requirements are met through the flying-hour program. Since the operations section of the COST model already includes flying-hour program funding, the inclusion of an equation for aircraft reconstitution in the equipment reconstitution section of the model is redundant. Air Force officials told us that, because of their flying-hour program, additional funding specifically addressing organizational- and intermediate-level reconstitution of aircraft was not needed. However, the Air Force’s fiscal year 2004 supplemental budget request included about $1.2 billion in the equipment reconstitution section for aircraft maintenance that was also covered by the operations section of the COST model. According to Air Force officials, they were unaware that the equipment reconstitution section duplicated the aircraft maintenance costs covered in the operations section of the COST model. In contrast, the Navy removed the funding that the COST model’s equipment reconstitution section provided for aircraft organizational- and intermediate-level maintenance. The equipment reconstitution section of the COST model also duplicates organizational-level ship reconstitution that is already covered in the operations section of the model. The Navy treated this potential duplication in the same manner as it treated the flying-hour program duplication: recognizing that this maintenance was already covered in the COST model’s operations section, the Navy included only intermediate-level ship maintenance and aircraft and ship depot-level maintenance in the equipment reconstitution section of its requirements calculation.
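The duplication described above is simple double counting: organizational- and intermediate-level aircraft maintenance is funded once through the flying-hour program in the model's operations section and again in its reconstitution section. The following sketch illustrates the arithmetic; the section names and all dollar figures except the $1.2 billion Air Force total are hypothetical:

```python
# Illustrative sketch of the COST model double count. Only the structure
# (not the numbers, except the $1.2 billion) reflects the report.

def total_request(operations, reconstitution):
    """Sum the two sections of a COST-model-style estimate ($ billions)."""
    return sum(operations.values()) + sum(reconstitution.values())

# Operations section: flying-hour program already covers organizational-
# and intermediate-level (O&I) aircraft maintenance. Figure is hypothetical.
operations = {"flying_hour_program_incl_OI_maintenance": 10.0}

# Reconstitution section as modeled: repeats the O&I aircraft maintenance.
recon_as_modeled = {"aircraft_OI_maintenance": 1.2, "depot_maintenance": 0.8}

# Corrected reconstitution section: drop the O&I aircraft line because the
# flying-hour program already funds it.
recon_corrected = {"depot_maintenance": 0.8}

overstatement = (total_request(operations, recon_as_modeled)
                 - total_request(operations, recon_corrected))
print(f"Overstatement from double counting: ${overstatement:.1f} billion")
```

The Navy's treatment corresponds to the corrected dictionary: it kept the operations-section funding and removed the duplicated aircraft line from the reconstitution section.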
The different ways that the Air Force and Navy treated aircraft reconstitution demonstrate how this potential redundancy is a weakness in the structure of OSD’s COST model. When we discussed these redundancies in the model with OSD comptroller officials, they told us that OSD and IDA officials had been unaware of this potential duplication. This lack of awareness was demonstrated when OSD comptroller officials did not identify and correct the Air Force’s duplication described previously. Air Force finance officials told us that they took steps when preparing their fiscal year 2005 supplemental budget to prevent this duplication from occurring again, and we observed that the fiscal year 2005 supplemental budget did not repeat it. There was also uncertainty within the department regarding which maintenance requirements were covered in the COST model. OSD and the services developed their equipment reconstitution requirements with the understanding that the COST model calculated organizational- and intermediate-level maintenance only, and they thus calculated their depot-level maintenance requirements outside of the model during the second phase of the process. However, we later learned that the model may have calculated some depot-level maintenance requirements, which could have resulted in the model duplicating the depot-level maintenance requirements calculated outside the model. We held a series of discussions with officials from OSD—the owner of the model—and IDA—its developer—to determine what types of maintenance requirements are included in the equipment reconstitution section of the model. OSD and IDA officials gave us differing descriptions of what the model included. In addition, these officials were unable to produce any written guidance that OSD provided IDA regarding what maintenance requirements the model’s equipment reconstitution section should cover.
They stated that this was determined so long ago that the paper trail no longer exists and that in all probability the guidance was transmitted verbally, via e-mail, or both. For example, in our initial meeting with OSD comptroller officials to determine how OSD and the services developed their equipment reconstitution estimates for the fiscal year 2004 supplemental budget, we were told that the equipment reconstitution section of the COST model included organizational- and intermediate-level maintenance requirements and excluded depot-level maintenance requirements. Officials from each of the services corroborated this viewpoint, stating that they understood the equipment reconstitution section of the COST model to include organizational- and intermediate-level maintenance requirements and exclude depot-level requirements. Army, Navy, Air Force, and Marine Corps finance officials also told us that they developed depot-level maintenance estimates separately from the COST model. Next, we met with IDA officials to better understand what levels of reconstitution maintenance were included in the equipment reconstitution section of the COST model. The IDA officials said that the model included organizational and intermediate levels of maintenance and could include some depot-level maintenance. However, IDA could not provide us with documentation to support this assertion, and we were therefore unable to determine whether the model’s calculations included or excluded depot-level maintenance. We shared IDA’s comments about the levels of maintenance covered in the model with OSD comptroller officials in a follow-up meeting. At this point, OSD comptroller officials told us that the COST model was intended to provide organizational-level maintenance costs only, which represent a small portion of the total equipment reconstitution requirement.
However, based on our analysis of the COST model, we concluded that intermediate-level requirements are indeed included in the COST model’s equations. In the end, OSD comptroller officials stated that they are now taking steps to clear up the confusion regarding the requirements for each type of maintenance and repair. As described previously, OSD comptroller officials did not clearly establish and communicate to IDA what levels of maintenance they expected the COST model to estimate. Until OSD clearly identifies the levels of maintenance it expects the COST model to cover and communicates this information to IDA, IDA officials cannot ensure that the COST model generates accurate and complete organizational- and intermediate-level maintenance cost estimates and does not duplicate depot-level maintenance costs calculated outside of the model. Consequently, the department cannot have confidence in its equipment reconstitution budget estimate. The Army did not consider funding in its baseline peacetime operation and maintenance budget that would be available for equipment reconstitution when developing its own estimate outside of the COST model. As a result, the Army overestimated its equipment reconstitution requirements. The Army used the COST model as a starting point for determining its equipment reconstitution requirements for fiscal year 2004. However, Army officials concluded that the $1.9 billion calculated by the model was inadequate. As a result, the Army developed its own methodology for calculating equipment reconstitution requirements and estimated that $3.0 billion would be needed for organizational- and intermediate-level repair and maintenance, about $1 billion higher than the OSD COST model estimate. Our analysis of the Army’s methodology for calculating equipment reconstitution requirements revealed a major weakness in the Army estimate.
The Army’s process for estimating its equipment reconstitution requirements did not include steps to offset total requirements with baseline funding. Consequently, we estimate that the Army’s equipment reconstitution estimate may have been overstated by between $299 million and $497 million. OSD’s guidance to the services for developing the supplemental budget specifies that, when estimating a contingency operation’s cost, the services were to offset funds already contained in their baseline budgets—peacetime maintenance costs that would not be incurred because the unit was deployed in support of the global war on terrorism. If not addressed, the failure to recognize and adjust for normal operating budgets overstates the funding requirement and could result in the Army overstating funding requirements for equipment reconstitution in the future. According to Army officials, this oversight occurred because the Army did not establish a specific step in its supplemental estimating process to offset its estimate with baseline budget funds. In reviewing the Army’s methodology for calculating equipment reconstitution requirements, we also found that the Army overestimated organizational- and intermediate-level maintenance costs for numerous equipment items. A comparison of the actual fiscal year 2004 equipment reconstitution obligations reported by the Army with the Army’s equipment reconstitution cost estimate showed that organizational- and intermediate-level repair costs for individual equipment items were significantly overestimated. Specifically, we collected actual cost data on 38 types of equipment included in the Army’s estimate and determined that the actual costs for reconstituting the items were lower than the Army’s estimates for 34 of the 38 items. (See table 2.) The Army was unable to provide us with adequate support for its estimates because it did not retain supporting documentation.
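The baseline-offset rule in OSD's guidance reduces to a subtraction: the amount chargeable to the supplemental is the total reconstitution requirement minus the peacetime maintenance funding freed up while the unit was deployed. In the sketch below, only the Army's $3.0 billion estimate and GAO's $299 million to $497 million overstatement range come from this report; the function and variable names are illustrative:

```python
# Hedged sketch of OSD's baseline-offset rule for supplemental estimates.
# Dollar figures are from the report where noted; everything else is
# an illustrative assumption.

def supplemental_requirement(total_reconstitution_cost, baseline_offset):
    """Incremental cost chargeable to the supplemental budget:
    total requirement minus peacetime maintenance funds freed up
    because the unit was deployed (never negative)."""
    return max(total_reconstitution_cost - baseline_offset, 0.0)

army_estimate = 3.0          # $ billions, the Army's O&I estimate (report)
baseline_freed_low = 0.299   # low end of GAO's overstatement range (report)
baseline_freed_high = 0.497  # high end of GAO's overstatement range (report)

low = supplemental_requirement(army_estimate, baseline_freed_high)
high = supplemental_requirement(army_estimate, baseline_freed_low)
print(f"Offset-adjusted requirement: ${low:.3f} to ${high:.3f} billion")
```

Because the Army's process had no such subtraction step, its request effectively used `baseline_offset = 0`, which is the source of the overstatement GAO identified.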
Consequently, we were not able to determine the reasons for differences between estimated costs and reported equipment reconstitution obligations. The services included costs in their reconstitution estimates, calculated outside the COST model, that do not appear to be equipment reconstitution costs as established by OSD’s guidance. In one case, the Navy and the Air Force included in their supplemental funding cost estimates unfunded fiscal year 2004 depot-level maintenance requirements that did not arise from OIF and other operations related to the global war on terrorism. The Navy and Air Force used unfunded peacetime depot maintenance requirements as the basis for the depot-level maintenance portion of their fiscal year 2004 supplemental equipment reconstitution requests. The Navy requested funding for unfunded ship overhauls, which, according to Atlantic Fleet officials, occur every year. These officials also stated that the Navy typically finds the funding needed to perform unfunded overhauls elsewhere in its baseline budget or delays the overhauls until the following year. In addition, the Air Force’s fiscal year 2004 supplemental request for depot-level equipment reconstitution consisted of funding for engine and airframe overhauls that were not funded in its fiscal year 2004 baseline budget. These requirements may not fall within the description of equipment reconstitution established in OSD’s guidance, which directed the services to request funds needed to restore forces to the same operational level as prior to deployment and to limit requests to costs already incurred as a direct result of operations in support of the war on terrorism. However, the DOD guidance also instructed the services to prepare their supplemental budget estimates around the DOD Financial Management Regulation, Chapter 23 Contingency Cost Breakdown Structure, which provides a broader description of reconstitution costs.
Taken as a whole, the DOD supplemental budget preparation guidance is unclear on what the services could and could not include in their budget submissions. In addition, Air Force and Navy officials said that funding for these requirements was needed to prepare their forces to be fully ready to fight the global war on terrorism. However, including these unfunded peacetime depot maintenance requirements may overstate the estimated cost of reconstituting equipment involved in operations in Iraq and Afghanistan. In preparing the fiscal year 2004 global war-on-terrorism supplemental budget, OSD allowed the services to request funding to replace only known battle losses and excluded projected battle losses and other expected losses. Such expected losses include equipment that would be considered beyond economic repair, such as crash-damaged vehicles and maintenance washouts. However, the replacement of these excluded items ultimately will need to be funded in future budgets to ensure that the Army and Marine Corps have an adequate amount of equipment to meet future challenges. The equipment replacement needs being quantified by the services for inclusion in the fiscal year 2005 global war-on-terrorism supplemental budget will incorporate some of the expenses excluded by OSD guidance from the fiscal year 2004 supplemental budget request. However, as of December 2004, the magnitude of this requirement was unknown. Using the Army’s equipment reconstitution requirements analysis, we estimated that the fiscal year 2004 equipment replacement requirement due to maintenance washouts ranged from $259 million to $562 million. Recognizing that not all OIF equipment losses were covered in the fiscal year 2004 supplemental, the Army has included some of these battle losses, crash losses, and maintenance washouts in its Tactical Wheeled Vehicle Study.
The Army initiated this study to identify shortfalls in its tactical wheeled vehicle fleets; the study also covers transformation requirements and requirements underfunded in past baseline budgets. The study will identify multiyear procurement requirements, and according to Army resource officials, the Army hopes to have these requirements funded through future baseline and supplemental budgets. As of March 2005, the study had not been issued, and the Army had been unable to provide us with an estimate of the procurement funding requirements, including the amount directly related to OIF equipment reconstitution requirements. Furthermore, the Army’s and Marine Corps’ reconstitution requirements for prepositioned equipment still being used in Iraq will grow, because anticipated battle and crash losses and maintenance washouts will continue to increase the longer the equipment remains in use. Until OSD allows the services to consider anticipated operational equipment losses and maintenance washouts in their supplemental budgeting process, equipment reconstitution requirements generated during the current fiscal year will inevitably be pushed out for funding in upcoming years, which could also affect the services’ ability to quickly reconstitute equipment in the current fiscal year. DOD has not accurately tracked and reported its equipment reconstitution costs because the services are unable to segregate equipment reconstitution from other maintenance requirements, as required. In part to provide Congress with information on global war-on-terrorism costs and to provide the OSD comptroller with a means to assess variances between obligations and the budget, DFAS compiles a monthly report to track these obligations. The DFAS report on funds obligated in support of the war includes a category for tracking equipment reconstitution obligations, and the guidance associated with this report describes what reconstitution costs can include.
Our analysis of the DFAS report showed that (1) the Air Force is not separately reporting equipment reconstitution obligations because it does not have a mechanism within its current accounting system to track them, (2) the Army is including unit reconstitution obligations that go beyond equipment reconstitution and other maintenance costs, and (3) the Navy is unable to segregate regular maintenance from reconstitution maintenance for ship overhauls. As a result, the services are reporting equipment reconstitution obligations inconsistently, and the report data are not reliable for accurately determining how much the services are actually obligating for the reconstitution of equipment returning from deployments in support of the global war on terrorism. Table 3 lists the fiscal year 2004 reconstitution obligations reported by the services in DFAS’s report as of the end of September 2004. DOD reports the costs of the global war on terrorism largely in accordance with the cost breakdown structure found in its financial management regulation. This internal DOD guidance describes equipment reconstitution costs as including the cost to clean, inspect, maintain, replace, and restore equipment to the required condition at the conclusion of the contingency operation or unit deployment. This guidance, which includes a specific cost category for equipment reconstitution, along with specific budget guidance issued by the OSD comptroller’s office that addresses incremental costs of the global war on terrorism, calls for the services to report equipment reconstitution costs separately from other incremental costs. Despite this guidance, the Air Force is not separately reporting equipment reconstitution obligations to DFAS for inclusion in its monthly terrorism cost report owing to the way the Air Force accounting system was designed.
Air Force officials told us that the Air Force’s accounting system currently has no way to delineate equipment reconstitution obligations from other global war-on-terrorism obligations for DFAS reporting purposes. Specifically, the Air Force has neither a mechanism nor established codes that can track the dollar amounts obligated for equipment reconstitution in fiscal years 2003 and 2004, and the guidance does not require it to establish such codes. The Air Force’s accounting system has two types of codes for classifying expenses: (1) element of expense investment codes, which are used for tracking obligations by commodities (such as supplies, travel, and civilian pay); and (2) emergency and special program codes, which are used to collect costs incurred during an emergency or a special program (such as the global war on terrorism). Neither of these codes, individually or in combination, equates to equipment reconstitution. Instead, equipment reconstitution obligations are spread throughout other categories in the DFAS terrorism cost report, as appropriate for the type of obligations incurred. Thus, equipment reconstitution obligations are reported in the operations category of the DFAS report rather than its reconstitution category and are mixed with other global war-on-terrorism obligations tracked by other cost categories. According to OSD comptroller officials, the ability to track actual global war-on-terrorism obligations is important because it gives them insight into the accuracy of the supplemental budget that was generated, in part, from the COST model. Army and Navy equipment reconstitution obligations may also be inaccurate in the DFAS report.
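The Air Force's tracking gap can be shown with a toy ledger: because each obligation carries only an element-of-expense code and an emergency and special program code, filtering on any code or combination recovers obligations by commodity or by contingency, never by reconstitution purpose. All codes, amounts, and purpose strings below are invented for illustration and are not actual Air Force data:

```python
# Hypothetical ledger illustrating why the Air Force's two code types
# cannot isolate equipment reconstitution obligations.

ledger = [
    # (element-of-expense code, emergency/special program code, $K, purpose)
    ("supplies",     "GWOT", 500, "reconstitution repair parts"),
    ("supplies",     "GWOT", 300, "day-to-day operations"),
    ("civilian_pay", "GWOT", 200, "reconstitution labor"),
    ("travel",       "GWOT", 100, "deployment travel"),
]

# Filtering on the available codes mixes reconstitution with everything else:
gwot_supplies = sum(amt for eeic, esp, amt, _ in ledger
                    if eeic == "supplies" and esp == "GWOT")
print(f"GWOT supplies obligations: ${gwot_supplies}K")  # 800 -- mixed purposes

# Only the free-text purpose, which the accounting system does not encode,
# would separate reconstitution from other GWOT obligations:
recon = sum(amt for *_, amt, purpose in ledger if "reconstitution" in purpose)
print(f"Actual reconstitution obligations: ${recon}K")  # 700
```

In this sketch the reconstitution total cuts across both the supplies and civilian-pay codes, which is why such obligations end up spread across the operations categories of the DFAS report.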
The Army and Navy track their equipment reconstitution obligations through certain codes available in their account encoding structures, but they may overstate reported equipment reconstitution obligations because these codes accumulate global war-on-terrorism and other obligations that were not exclusively for equipment reconstitution. During our review, we found that the Army is including obligations not directly related to equipment reconstitution requirements arising from global war-on-terrorism deployments. For example, units reconstituting after returning from Iraq may be including maintenance obligations generated during training exercises. Army officials stated that these units are being ordered to rapidly prepare for subsequent deployments, which includes reconstituting equipment and engaging in training exercises, among various other tasks. According to Army officials, training exercises can require the use of equipment that has not yet been fully reconstituted, and the exercises generate additional maintenance requirements not related to global war-on-terrorism equipment reconstitution. However, these training-related maintenance requirements are not readily separable from the equipment reconstitution requirements and are therefore being included as part of the Army’s reported reconstitution obligations. In addition, some reconstitution obligations do not readily align with the cost categories in the DFAS report and are thus included in existing categories, such as equipment reconstitution. For example, Army budget officials told us that when units return home from a deployment they incur reconstitution expenses unrelated to their equipment. These expenses include training needed to reestablish a unit’s ability to perform its mission and personnel expenses related to the movement of soldiers in and out of the unit after an extensive deployment.
The officials told us that these obligations are being included in the equipment reconstitution section because this was the best place to account for them. As a result, the Army’s equipment reconstitution amounts are likely overstated. The Navy’s reported reconstitution obligations include some maintenance costs that did not appear to result directly from global war-on-terrorism deployments. Portions of the Navy’s reported obligations for equipment reconstitution include major ship repairs and overhauls, called “availabilities,” which are accomplished at specified time intervals independent of the ship’s use during the global war on terrorism. The specific maintenance tasks performed during an availability depend on the type of ship, the type of repair or overhaul being conducted, and the ship’s condition. According to Navy officials, they report all of the obligations incurred in conjunction with an availability funded by the fiscal year 2004 supplemental budget as equipment reconstitution, regardless of whether the obligations are due to conditions that existed prior to a global war-on-terrorism deployment or to peacetime maintenance conducted during all availabilities. The Navy reports all of these obligations as equipment reconstitution because it does not have a process to distinguish the obligations for an availability that are driven by the higher level of operations generated by global war-on-terrorism deployments from the baseline requirements generated by a ship during deployments not related to the global war on terrorism. While all of the maintenance tasks associated with availabilities that are funded with supplemental money are needed to effectively maintain the condition of Navy ships, not all of the maintenance performed during the availabilities resulted from the ships’ being deployed in support of the global war on terrorism.
Thus, some of the Navy’s reported obligations do not appear to fit DOD’s criteria for equipment reconstitution costs that should be reported as an incremental cost of the global war on terrorism. As the services, especially the Army and Marine Corps, continue to conduct operations related to the global war on terrorism at the current high pace, they will continue to generate equipment reconstitution requirements. OSD and the services will continue to use the department’s two-phased process to develop estimates of these maintenance requirements so that they can be funded through supplemental and baseline budgets. However, DOD cannot be assured that its global war-on-terrorism supplemental budget requirements are as reliable and complete as possible until the OSD comptroller: revises its contingency model to ensure that costs covered by the model’s operating tempo cost elements are not duplicated by costs in the model’s reconstitution cost elements; clearly establishes what equipment reconstitution maintenance requirements should be covered by the COST model and communicates this information to IDA to ensure that the model calculations reflect only these maintenance costs; clarifies its guidance to the services on what types of maintenance requirements should and should not be included as equipment reconstitution when developing the supplemental budget; and ensures that all anticipated equipment reconstitution requirements, such as operational losses and maintenance washouts, are considered when developing supplemental budget requests. Overestimating these requirements could result in a misapplication of funds, while underestimating them could require the services to draw funds from baseline programs or leave the services unable to fully reconstitute their equipment.
Improving DOD’s process for estimating equipment reconstitution maintenance and equipment replacement requirements will aid the services in reducing the risks they face in executing the equipment reconstitution program and help maintain a military that is able to meet the nation’s needs. With global war-on-terrorism operations continuing at a high pace, the Army will generate additional equipment reconstitution requirements that will be funded through future supplemental budgets. Although OSD did not use the Army’s fiscal year 2004 equipment reconstitution estimate for the fiscal year 2004 supplemental request, that does not preclude OSD from using future Army estimates. Consequently, it is important that the Army appropriately offset its equipment reconstitution estimate with baseline peacetime funding, which it did not do for its fiscal year 2004 estimate. Until the Army establishes a step in its supplemental estimating process to offset the estimate with the baseline budget, its calculation of equipment reconstitution requirements for future supplemental budgets will continue to be overstated. Further, if OSD uses the Army-calculated equipment reconstitution estimate without adjusting it for baseline funding, the Army’s equipment reconstitution requirements could be overfunded, limiting the funding available for other requirements and potentially increasing risks in other areas. Inconsistencies in how the services report equipment reconstitution obligations in the DFAS global war-on-terrorism cost report mean that equipment reconstitution and other related cost categories are being inaccurately reported.
Until DOD develops comprehensive and consistent methods for tracking and reporting equipment reconstitution obligations—including (1) developing a mechanism within the Air Force for identifying, accumulating, and reporting its equipment reconstitution obligations; and (2) refining the Navy and Army processes for identifying obligations that are incurred exclusively for equipment reconstitution—the usefulness of the DFAS report will remain limited. Improving the accuracy and completeness of the report will result in a DFAS cost report that is more useful to OSD and Congress in their oversight of global war-on-terrorism obligations. To correct the weaknesses we identified in the equipment reconstitution cost estimating process that the department used when developing its fiscal year 2004 supplemental budget request, we recommend that the Secretary of Defense take the following five actions:

• Direct the OSD comptroller to revise its COST model to ensure that costs covered by the model’s operating tempo cost elements are not duplicated by costs in the model’s reconstitution cost elements;

• Direct the OSD comptroller to clearly establish what equipment maintenance requirements should be covered by the COST model and communicate this information to IDA to ensure that the model calculations reflect only these maintenance costs;

• Direct the Secretary of the Army to establish a step in the Army’s supplemental estimating process to offset the estimate with the baseline budget to improve future contingency funding estimates;

• Direct the OSD comptroller to clarify its supplemental budget guidance to the services on what types of maintenance requirements should and should not be included as equipment reconstitution when developing the supplemental budget; and

• Direct the OSD comptroller to ensure that all potential equipment reconstitution requirements are considered when developing supplemental budget requests by allowing the services to include anticipated equipment losses—both
operational losses and maintenance washouts—in their supplemental budgeting process. To ensure that Congress has a clear insight into the cost of equipment reconstitution, we also recommend that the Secretary of Defense direct the services, in conjunction with DFAS, to develop comprehensive and consistent methods for tracking and reporting equipment reconstitution obligations. This includes (1) developing a mechanism within the Air Force for identifying, accumulating, and reporting its equipment reconstitution obligations; and (2) refining the Navy and Army processes for identifying obligations that are incurred for equipment reconstitution. The OSD Comptroller, Director for Operations and Personnel provided oral comments on a draft of this report for DOD and concurred with two of our six recommendations, partially concurred with three recommendations, and did not concur with the other recommendation. In concurring with our recommendation that OSD clarify its supplemental budget guidance to the services on what types of maintenance requirements should and should not be included as equipment reconstitution when developing the supplemental budget, the OSD director stated that improvements are made to each iteration of the guidance. We confirmed that the guidance provided to the services for the fiscal year 2005 supplemental budget was much more detailed and comprehensive than the guidance provided for developing the fiscal years 2003 and 2004 supplemental budgets. In concurring with our recommendation that the services, in conjunction with DFAS, develop comprehensive and consistent methods for tracking and reporting equipment reconstitution obligations, the OSD director stated that they have already revised their financial management regulation to improve reporting of equipment reconstitution. However, until additional actions are taken, such as improving the services’ financial systems’ ability to track obligations, our recommendation will not be fully implemented. 
In partially concurring with our recommendation regarding the duplication of maintenance costs in the COST model, the OSD director stated that the department has revised DOD’s financial management regulations to ensure that the cost of equipment maintenance is not duplicated in different COST model sections. However, as currently written, the revised section contains no instructions on avoiding duplication of the maintenance requirements calculated by the model’s operations section; instead, it simply divides reconstitution into four subcategories of maintenance. Until further changes are made, the intent of our recommendation will not have been met. In partially concurring with our recommendation that the OSD comptroller clearly establish what equipment reconstitution maintenance requirements should be covered by the COST model, the OSD director stated that IDA, the model’s operator, periodically receives specific guidance from the comptroller’s office on the criteria and elements of costs to be included in the model’s calculations. However, as we reported, neither OSD comptroller nor IDA officials were able to provide us with examples of this guidance when requested. The OSD director also told us that the department has taken action to ensure that the model calculates costs in accordance with the DOD Financial Management Regulation and that it issues guidance to the services on what costs will be covered by the model. Although Volume 12, Chapter 23, Section 3.5 of the financial management regulation has been revised to delineate equipment reconstitution into four categories—organizational-, intermediate-, and depot-level maintenance; and contractor logistics support—the section does not state which of these categories are covered by the COST model.
In addition, while OSD guidance to the services for developing the fiscal year 2005 supplemental budget stated that intermediate- and depot-level maintenance would be calculated outside the model, the guidance provided for developing the fiscal years 2003 and 2004 supplemental budgets only specified that depot-level maintenance would be calculated outside the model. Until changes are made establishing what maintenance requirements are in the COST model and clearly communicating this to IDA, the intent of our recommendation will not have been met. The OSD director did not concur with our recommendation that the Secretary of the Army be directed to establish a step in its supplemental budget process for estimating equipment reconstitution requirements to offset the estimate with baseline funding. The OSD director said that the process for developing the supplemental budget already includes steps for excluding costs for equipment maintenance funded in the baseline budget. As stated in this report, we acknowledge that OSD’s COST model equations for equipment reconstitution contain a factor for reducing the reconstitution requirements by taking baseline funding into account. We also acknowledge that OSD’s process for building the supplemental budget has controls in place to help ensure that only incremental costs are included in the supplemental budget. However, we believe that the services’ supplemental budget estimating processes also need to have such assurances built in. We made our recommendation to address the fiscal year 2004 equipment reconstitution requirement that the Army developed separately from the COST model during what we have described in this report as the second phase of DOD’s process for building the supplemental budget. When developing this requirement, the Army did not take available baseline funding into account, because it did not have a step in its supplemental budget estimating process to offset the estimate with baseline funding.
If the OSD comptroller had used this Army-generated requirement in its fiscal year 2004 supplemental budget, OSD comptroller officials might have missed the Army’s failure to adjust this requirement for baseline funding. Importantly, the very safeguards that the OSD comptroller stated it has in place failed to offset the $1.2 billion duplication of aircraft maintenance requirements that the Air Force included in its fiscal year 2004 supplemental budget requirement. Taking action on our recommendation would provide the Army with the necessary safeguards to submit accurate budget estimates and avoid the potential that future supplemental budgets could provide more than incremental funding to the Army. Therefore, we continue to believe our recommendation has merit. In partially concurring with our recommendation that the services be allowed to include equipment losses in their supplemental budget requirements, the OSD director stated that the department typically addresses these potential future costs through subsequent budget requests or through reprogramming efforts. The OSD director also told us that it is conceivable that some factor reflecting maintenance washout trends could be considered in future supplemental budget requests. If this action is taken, it should satisfy the intent of our recommendation. As we stated in our report, until OSD allows the services to consider anticipated operational equipment losses and maintenance washouts in their supplemental budgeting process, equipment reconstitution requirements generated during the current fiscal year will inevitably be pushed out for funding in upcoming years. Failing to do so could also affect the services’ ability to quickly reconstitute equipment in the current fiscal year.
We are sending copies of this report to other appropriate congressional committees, the Secretary of Defense, the Secretary of the Army, the Secretary of the Navy, the Secretary of the Air Force, the Commandant of the Marine Corps, and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this letter, please contact me at (202) 512-8412 or solisw@gao.gov or my assistant director, Julia Denman, at (202) 512-4290 or denmanj@gao.gov. Other major contributors to this letter were John Strong, Bob Malpass, Andy Marek, Robert Wild, Dave Mayfield, and Charles Perdue. To determine whether the process DOD used to develop its fiscal year 2004 supplemental budget equipment reconstitution requirements was accurate, we met with officials from the OSD Under Secretary of Defense (Comptroller); Assistant Secretary of the Army, Financial Management and Comptroller; Assistant Secretary of the Navy, Financial Management and Comptroller; Assistant Secretary of the Air Force, Financial Management and Comptroller; Headquarters U.S. Marine Corps Programs and Resources Department; and U.S. Marine Corps Deputy Commandant, Installations and Logistics. We also collected and reviewed OSD and service equipment reconstitution guidance to develop an understanding of the processes the department used to develop its fiscal year 2004 equipment reconstitution requirements. As part of this effort, we also met with officials from OSD Under Secretary of Defense (Comptroller); Headquarters U.S. Army Deputy Chief of Staff G-4; Headquarters U.S. Navy Deputy Chief of Naval Operations (Logistics); Headquarters U.S. Air Force Deputy Chief of Staff Installation and Logistics; and Headquarters U.S.
Marine Corps Programs and Resources Department to identify the methodologies used by OSD and each of the services for determining equipment reconstitution requirements and collected related documentation. To gain further insight into the accuracy of the process the Army used to develop its equipment reconstitution requirement, we met with officials at and collected data from the Army Materiel Systems Analysis Activity (AMSAA); Forts Bragg, Campbell, Dix, Hood, Riley, and Stewart; and Camps Arifjan and Doha, Kuwait. We collected actual equipment reconstitution cost data from these activities and compared them to the data the Army used in developing its equipment reconstitution requirement. To determine the reliability of the actual reconstitution costs for Army equipment, we discussed and observed the equipment reconstitution data collection process at four of the Army bases we visited, where we found a consistent process for collecting and entering equipment reconstitution data into the Army’s database. We also visited the AMSAA team that was managing and summarizing the Army’s equipment reconstitution data collection effort to determine that the personnel collecting the data and managing the data collection effort were performing quality reviews to ensure completeness and accuracy. Additionally, the AMSAA management team had the Army commands review the data collected at their installations prior to passing the data to higher commands. Based on this assessment, we concluded that the data collection effort was sufficiently comprehensive and reliable to provide data for this engagement. We also collected and analyzed data the services used in developing the reconstitution requirements that they submitted to OSD. Further, we collected and analyzed data OSD used to develop the department’s overall equipment reconstitution requirement that was included in the fiscal year 2004 global war-on-terrorism supplemental budget request.
To gain further insight into how OSD developed the equipment reconstitution requirement, we met with IDA officials to develop an understanding of how OSD’s COST model calculates this requirement. We limited our examination of the COST model to the section that calculates equipment reconstitution requirements, which consists of only 4 of the 188 equations that make up the COST model. To understand the extent to which equipment reconstitution requirements generated during fiscal year 2004 will have to be funded in upcoming budgets, we met with officials of, and obtained guidance issued by, the OSD Under Secretary of Defense (Comptroller) regarding how the services’ equipment reconstitution requirements were restricted. To quantify the effect of these limitations, we collected Army data on potential equipment losses and estimated their possible impact on future budgets. To assess the equipment reconstitution requirement inputs provided to the services by their component commands and units, we met with officials of and collected data at Army Forces Command, Naval Air Systems Command, U.S. Atlantic Fleet, Air Combat Command, and U.S. Marine Corps Logistics Command. Using the information and analysis described here, we assessed the reasonableness and completeness of the department’s equipment reconstitution requirements. To determine how accurately and completely the department is tracking and reporting equipment reconstitution costs, we met with officials of and collected documentation from the Assistant Secretary of the Army, Financial Management and Comptroller; Assistant Secretary of the Navy, Financial Management and Comptroller; Assistant Secretary of the Air Force, Financial Management and Comptroller; and Headquarters U.S. Marine Corps Programs and Resources Department. We reviewed how the services accumulate and report obligations associated with their equipment reconstitution efforts.
We compared and contrasted what types of obligations each service considered equipment reconstitution for inclusion in the Defense Finance and Accounting Service’s report that tracks fiscal year 2004 global war-on-terrorism obligations. We discussed the inconsistencies noted during our review with the service officials listed above to determine why equipment reconstitution obligations were reported inconsistently. We performed our work from September 2003 through March 2005 in accordance with generally accepted government auditing standards.
The high pace of military operations in Iraq and elsewhere has generated a multibillion-dollar equipment maintenance requirement that must be addressed after units return home. Upon returning from deployments, active, reserve, and National Guard units reconstitute, or restore, their equipment to a condition that enables them to conduct training and prepare for future deployments. The Department of Defense (DOD) uses a two-phased process to develop equipment reconstitution supplemental budget estimates. GAO reviewed this process for the fiscal year 2004 supplemental budget to determine (1) the extent to which the process produced reliable estimates of reconstitution requirements, and (2) whether DOD is accurately tracking and reporting reconstitution costs. DOD's two-phased process to develop its fiscal year 2004 equipment reconstitution cost estimates contained weaknesses that produced errors, which may result in misstatements of future-year reconstitution cost requirements. The model DOD used to estimate costs in the first phase of the process generated unreliable estimates for two main reasons. First, the model can overstate aircraft and ship reconstitution costs because these costs are covered in two different sections of the model. As a result, the model's estimate for Air Force aircraft reconstitution was overstated by over $1 billion. Second, there is uncertainty over what maintenance requirements the model covered. The Office of the Secretary of Defense (OSD) and the services developed their requirements with the understanding that the model did not calculate all maintenance requirements. GAO learned that the model may duplicate some requirements that the services manually calculated and included in their cost estimates. Consequently, DOD cannot have confidence that its equipment reconstitution budget estimate is reliable.
There are also reconstitution estimating and guidance problems associated with the second phase of the process, where the services may develop alternative estimates outside of the model. For instance, the Army failed to consider funding in its baseline budget that would be available for equipment reconstitution. In another instance, the services included requirements in their reconstitution estimates that appear to go beyond equipment reconstitution as established by OSD's guidance. In addition, GAO found an accumulation of unfulfilled equipment reconstitution requirements, because OSD guidance precluded the services from requesting funds for projected battle and other expected losses. The effect of losses not recognized in OSD's supplemental budget requirements has not yet been quantified and may be significant. GAO believes these problems are creating a backlog of equipment reconstitution requirements that will eventually need to be addressed in future budgets. DOD has not accurately tracked and reported its equipment reconstitution costs because the services are unable to segregate equipment reconstitution from other maintenance requirements as required. As a result, DOD cannot accurately report the cost of equipment reconstitution and, consequently, the total cost of the global war on terrorism. The Air Force does not break out its equipment reconstitution obligations from other global war-on-terrorism obligations in a DOD monthly cost report because it does not have a mechanism that can track the amounts obligated on equipment reconstitution and delineate such obligations from routine maintenance. Further, Army- and Navy-reported equipment reconstitution obligations are likely overstated in the monthly report because they include other maintenance costs—such as those related to equipment used in training exercises—that do not fall within DOD's description of equipment reconstitution.
As computer technology has advanced, federal agencies have become dependent on computerized information systems to carry out their operations and to process, maintain, and report essential information. Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions, deliver services to the public, and account for their resources without these information assets. Information security is thus especially important for federal agencies to ensure the confidentiality, integrity, and availability of their information and information systems. Conversely, ineffective information security controls can result in significant risk to a broad array of government operations and assets. Examples of such risks include the following:
● Resources, such as federal payments and collections, could be lost or stolen.
● Computer resources could be used for unauthorized purposes or to launch attacks on other computer systems.
● Sensitive information, such as taxpayer data, Social Security records, medical records, intellectual property, and proprietary business information, could be inappropriately disclosed, browsed, or copied for purposes of identity theft, espionage, or other types of crime.
● Critical operations, such as those supporting critical infrastructure, national defense, and emergency services, could be disrupted.
● Data could be added, modified, or deleted for purposes of fraud, subterfuge, or disruption.
● Agency missions could be undermined by embarrassing incidents that result in diminished confidence in the ability of federal organizations to conduct operations and fulfill their responsibilities.
Cyber threats to federal information systems and cyber-based critical infrastructures are evolving and growing. In September 2007, we reported that these threats can be unintentional and intentional, targeted or nontargeted, and can come from a variety of sources.
Unintentional threats can be caused by inattentive or untrained employees, software upgrades, maintenance procedures, and equipment failures that inadvertently disrupt systems or corrupt data. Intentional threats include both targeted and nontargeted attacks. A targeted attack occurs when a group or individual attacks a specific system or cyber-based critical infrastructure. A nontargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or other malicious software is released on the Internet with no specific target. Government officials are concerned about attacks from individuals and groups with malicious intent, such as criminals, terrorists, and adversarial foreign nations. The Federal Bureau of Investigation has identified multiple sources of threats to our nation’s critical information systems, including foreign nations engaged in espionage and information warfare, domestic criminals, hackers, virus writers, and disgruntled employees and contractors working within an organization. Table 1 summarizes those groups and types of individuals that are considered to be key sources of cyber threats to our nation’s information systems and cyber infrastructures. These groups and individuals have a variety of attack techniques at their disposal. Furthermore, as we have previously reported, the techniques have characteristics that can vastly enhance the reach and impact of their actions, such as the following:
● Attackers do not need to be physically close to their targets to perpetrate a cyber attack.
● Technology allows actions to easily cross multiple state and national borders.
● Attacks can be carried out automatically, at high speed, and by attacking a vast number of victims at the same time.
● Attackers can more easily remain anonymous.
The growing connectivity between information systems, the Internet, and other infrastructures creates opportunities for attackers to disrupt telecommunications, electrical power, and other critical services. As government, private sector, and personal activities continue to move to networked operations, as digital systems add ever more capabilities, as wireless systems become more ubiquitous, and as the design, manufacture, and service of information technology have moved overseas, the threat will continue to grow. Over the past year, cyber exploitation activity has grown more sophisticated, more targeted, and more serious. For example, the Director of National Intelligence stated that, in August 2008, the Georgian national government’s Web sites were disabled during hostilities with Russia, which hindered the government’s ability to communicate its perspective about the conflict. The director expects disruptive cyber activities to become the norm in future political and military conflicts. Consistent with the evolving and growing nature of the threats to federal systems, agencies are reporting an increasing number of security incidents. These incidents put sensitive information at risk. Personally identifiable information about Americans has been lost, stolen, or improperly disclosed, thereby potentially exposing those individuals to loss of privacy, identity theft, and financial crimes. Reported attacks and unintentional incidents involving critical infrastructure systems demonstrate that a serious attack could be devastating. Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. When incidents occur, agencies are to notify the federal information security incident center—the United States Computer Emergency Readiness Team (US-CERT). 
As shown in figure 1, the number of incidents reported by federal agencies to US-CERT has increased dramatically over the past 3 years, from 5,503 incidents reported in fiscal year 2006 to 16,843 incidents in fiscal year 2008 (about a 206 percent increase). The three most prevalent types of incidents reported to US-CERT during fiscal years 2006 through 2008 were unauthorized access (where an individual gains logical or physical access to a system without permission), improper usage (a violation of acceptable computing use policies), and investigation (unconfirmed incidents that are potentially malicious or anomalous activity deemed by the reporting entity to warrant further review). The growing threats and increasing number of reported incidents highlight the need for effective information security policies and practices. However, serious and widespread information security control deficiencies continue to place federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. In their fiscal year 2008 performance and accountability reports, 20 of 24 major agencies indicated that inadequate information system controls over financial systems and information were either a significant deficiency or a material weakness for financial statement reporting (see fig. 2). Similarly, our audits have identified control deficiencies in both financial and nonfinancial systems, including vulnerabilities in critical federal systems. For example, we reported in September 2008 that, although the Los Alamos National Laboratory—one of the nation’s weapons laboratories—had implemented measures to enhance the information security of its unclassified network, vulnerabilities continued to exist in several critical areas.
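The growth figure cited above can be checked with simple arithmetic:

```python
# Quick check of the US-CERT incident growth cited above.
fy2006 = 5_503    # incidents reported in fiscal year 2006
fy2008 = 16_843   # incidents reported in fiscal year 2008

percent_increase = (fy2008 - fy2006) / fy2006 * 100
print(round(percent_increase))  # 206, consistent with "about a 206 percent increase"
```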
In addition, in May 2008 we reported that the Tennessee Valley Authority (TVA)—a federal corporation and the nation’s largest public power company, which generates and transmits electricity using its 52 fossil, hydro, and nuclear power plants and transmission facilities—had not fully implemented appropriate security practices to secure the control systems used to operate its critical infrastructures. Similarly, in October 2009 we reported that the National Aeronautics and Space Administration (NASA)—the civilian agency that oversees U.S. aeronautical and space activities—had not always implemented appropriate controls to sufficiently protect the confidentiality, integrity, and availability of the information and systems supporting its mission directorates. Over the last several years, most agencies have not implemented controls sufficiently to prevent, limit, or detect unauthorized access to computer networks, systems, or information. Our analysis of inspectors general, agency, and our own reports determined that agencies did not have adequate controls in place to ensure that only authorized individuals could access or manipulate data on their systems and networks. To illustrate, weaknesses were reported in such controls at 23 of 24 major agencies for fiscal year 2008. For example, agencies did not consistently (1) identify and authenticate users to prevent unauthorized access; (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (3) establish sufficient boundary protection mechanisms; (4) apply encryption to protect sensitive data on networks and portable devices; and (5) log, audit, and monitor security-relevant events. At least nine agencies also lacked effective controls to restrict physical access to information assets.
We previously reported that many of the data losses occurring at federal agencies over the past few years were a result of physical thefts or improper safeguarding of systems, including laptops and other portable devices. An underlying cause of information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented key elements of an agencywide information security program. An agencywide security program, required by the Federal Information Security Management Act (FISMA), is intended to provide a framework and continuing cycle of activities, including assessing and managing risk, developing and implementing security policies and procedures, promoting security awareness and training, monitoring the adequacy of the entity’s computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Our analysis determined that 23 of 24 major federal agencies had weaknesses in their agencywide information security programs. Due to the persistent nature of these vulnerabilities and associated risks, we continued to designate information security as a governmentwide high-risk issue in our most recent biennial report to Congress, a designation we have made in each report since 1997. Over the past several years, we and inspectors general have made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. For example, we recommended that agencies correct specific information security deficiencies related to user identification and authentication, authorization, boundary protections, cryptography, audit and monitoring, physical security, configuration management, segregation of duties, and contingency planning.
We have also recommended that agencies fully implement comprehensive, agencywide information security programs by correcting weaknesses in risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. The effective implementation of these recommendations will strengthen the security posture at these agencies. Agencies have implemented or are in the process of implementing many of our recommendations. In June 2009 we proposed a list of suggested actions that could improve FISMA and its associated implementing guidance, including (1) clarifying requirements for testing and evaluating security controls; (2) requiring agency heads to provide an assurance statement on the overall adequacy and effectiveness of the agency’s information security program; (3) enhancing independent annual evaluations; and (4) strengthening annual reporting mechanisms. In addition, the White House, the Office of Management and Budget (OMB), and certain federal agencies have undertaken several governmentwide initiatives that are intended to enhance information security at federal agencies. These key initiatives are discussed below.
● Comprehensive National Cybersecurity Initiative: In January 2008, President Bush began to implement a series of initiatives aimed primarily at improving the Department of Homeland Security’s (DHS) and other federal agencies’ efforts to protect against intrusion attempts and anticipate future threats. While details of these initiatives have not been made public, the Director of National Intelligence stated that they include defensive, offensive, research and development, and counterintelligence efforts, as well as a project to improve public-private partnerships.
● The Information Systems Security Line of Business: The goal of this initiative, led by OMB, is to improve the level of information systems security across government agencies and reduce costs by sharing common processes and functions for managing information systems security. Several agencies have been designated as service providers for computer security awareness training and FISMA reporting.
● Federal Desktop Core Configuration: For this initiative, OMB directed agencies that have Windows XP and/or Windows Vista operating systems deployed to adopt the security configurations developed by the National Institute of Standards and Technology, the Department of Defense, and DHS. The goal of this initiative is to improve information security and reduce overall information technology operating costs.
● Einstein: This is a computer network intrusion detection system that analyzes network flow information from participating federal agencies. The system is to provide a high-level perspective from which to observe potential malicious activity in computer network traffic of participating agencies’ computer networks.
● Trusted Internet Connections Initiative: This is an effort designed to optimize individual agency network services into a common solution for the federal government. The initiative is to facilitate the reduction of external connections, including Internet points of presence.
We currently have ongoing work that addresses the status, planning, and implementation efforts of several of these initiatives. Federal law and policy establish DHS as the focal point for efforts to protect our nation’s computer-reliant critical infrastructures—a practice known as cyber critical infrastructure protection, or cyber CIP. We have reported since 2005 that DHS has yet to fully satisfy its key responsibilities for protecting these critical infrastructures.
Our reports included recommendations that are essential for DHS to address in order to fully implement its responsibilities. We summarized these recommendations into the key areas listed in table 2. DHS has since developed and implemented certain capabilities to satisfy aspects of its responsibilities, but the department still has not fully implemented our recommendations, and thus further action needs to be taken to address these areas. For example, in July 2008, we reported that DHS’s US-CERT did not fully address 15 key attributes of cyber analysis and warning capabilities related to (1) monitoring network activity to detect anomalies, (2) analyzing information and investigating anomalies to determine whether they are threats, (3) warning appropriate officials with timely and actionable threat and mitigation information, and (4) responding to the threat. In particular, US-CERT provided warnings by developing and distributing a wide array of notifications; however, these notifications were not consistently actionable or timely. As a result, we recommended that the department address shortfalls associated with the 15 attributes in order to fully establish a national cyber analysis and warning capability as envisioned in the national strategy. DHS agreed in large part with our recommendations. Similarly, in September 2008, we reported that since conducting a major cyber attack exercise, called Cyber Storm, DHS had demonstrated progress in addressing eight lessons it had learned from the exercise. However, its actions to address the lessons had not been fully implemented. Specifically, while it had completed 42 of the 66 activities identified, the department had identified 16 activities as ongoing and 7 as planned for the future. Consequently, we recommended that DHS schedule and complete all of the corrective activities identified in order to strengthen coordination between public and private sector participants in response to significant cyber incidents.
DHS concurred with our recommendation. Since that time, DHS has continued to make progress in completing some identified activities but has yet to do so for others. Because the threats to federal information systems and critical infrastructure have persisted and grown, the executive branch has recently undertaken efforts to review the nation’s cybersecurity strategy. As we previously stated, in January 2008 the Comprehensive National Cybersecurity Initiative was established with the primary aim of improving federal agencies’ efforts to protect against intrusion attempts and anticipate future threats. In February 2009, President Obama directed the National Security Council and Homeland Security Council to conduct a comprehensive review to assess the United States’ cybersecurity-related policies and structures. The resulting report, “Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure,” recommended, among other things, appointing an official in the White House to coordinate the nation’s cybersecurity policies and activities, creating a new national cybersecurity strategy, and developing a framework for cyber research and development. We recently initiated a review to assess the progress made by the executive branch in implementing the review’s recommendations. We also testified in March 2009 on needed improvements to the nation’s cybersecurity strategy. In preparation for that testimony, we obtained the views of experts (by means of panel discussions) on critical aspects of the strategy, including areas for improvement. The experts, who included former federal officials, academics, and private sector executives, highlighted 12 key improvements that are, in their view, essential to improving the strategy and our national cybersecurity posture. The key strategy improvements identified by cybersecurity experts are listed in table 3.
These recommended improvements to the national strategy are in large part consistent with our previous reports and extensive research and experience in this area. Until they are addressed, our nation’s most critical federal and private sector cyber infrastructure remains at unnecessary risk of attack from our adversaries. In summary, the threats to federal information systems are evolving and growing, and federal systems are not sufficiently protected to consistently thwart the threats. Unintended incidents and attacks from individuals and groups with malicious intent, such as criminals, terrorists, and adversarial foreign nations, have the potential to cause significant damage to the ability of agencies to effectively perform their missions, deliver services to constituents, and account for their resources. To help in meeting these threats, opportunities exist to improve information security throughout the federal government. The White House, OMB, and certain federal agencies have initiated efforts that are intended to strengthen the protection of federal information and information systems. In addition, the prompt and effective implementation of the hundreds of recommendations made by us and by agency inspectors general to mitigate information security control deficiencies and fully implement agencywide security programs would also strengthen the protection of federal information systems, as would efforts by DHS to develop better capabilities to meet its responsibilities and the implementation of recommended improvements to the national cybersecurity strategy. Until agencies fully and effectively implement these recommendations, federal information and systems will remain vulnerable. If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov, or David A. Powner at (202) 512-9286 or pownerd@gao.gov.
Other key contributors to this statement include John de Ferrari (Assistant Director), Matthew Grote, Nick Marinos, and Lee McCracken.

Information Security: NASA Needs to Remedy Vulnerabilities in Key Networks. GAO-10-4. Washington, D.C.: October 15, 2009.
Information Security: Concerted Effort Needed to Improve Federal Performance Measures. GAO-09-617. Washington, D.C.: September 14, 2009.
Information Security: Agencies Continue to Report Progress, but Need to Mitigate Persistent Weaknesses. GAO-09-546. Washington, D.C.: July 17, 2009.
Cybersecurity: Continued Federal Efforts Are Needed to Protect Critical Systems and Information. GAO-09-835T. Washington, D.C.: June 25, 2009.
Privacy and Security: Food and Drug Administration Faces Challenges in Establishing Protections for Its Postmarket Risk Analysis System. GAO-09-355. Washington, D.C.: June 1, 2009.
Aviation Security: TSA Has Completed Key Activities Associated with Implementing Secure Flight, but Additional Actions Are Needed to Mitigate Risks. GAO-09-292. Washington, D.C.: May 13, 2009.
Information Security: Cyber Threats and Vulnerabilities Place Federal Systems at Risk. GAO-09-661T. Washington, D.C.: May 5, 2009.
Freedom of Information Act: DHS Has Taken Steps to Enhance Its Program, but Opportunities Exist to Improve Efficiency and Cost-Effectiveness. GAO-09-260. Washington, D.C.: March 20, 2009.
Information Security: Securities and Exchange Commission Needs to Consistently Implement Effective Controls. GAO-09-203. Washington, D.C.: March 16, 2009.
National Cyber Security Strategy: Key Improvements Are Needed to Strengthen the Nation’s Posture. GAO-09-432T. Washington, D.C.: March 10, 2009.
Information Security: Further Actions Needed to Address Risks to Bank Secrecy Act Data. GAO-09-195. Washington, D.C.: January 30, 2009.
Information Security: Continued Efforts Needed to Address Significant Weaknesses at IRS. GAO-09-136. Washington, D.C.: January 9, 2009.
Nuclear Security: Los Alamos National Laboratory Faces Challenges in Sustaining Physical and Cyber Security Improvements. GAO-08-1180T. Washington, D.C.: September 25, 2008.
Critical Infrastructure Protection: DHS Needs to Better Address Its Cyber Security Responsibilities. GAO-08-1157T. Washington, D.C.: September 16, 2008.
Critical Infrastructure Protection: DHS Needs to Fully Address Lessons Learned from Its First Cyber Storm Exercise. GAO-08-825. Washington, D.C.: September 9, 2008.
Information Security: Actions Needed to Better Protect Los Alamos National Laboratory’s Unclassified Computer Network. GAO-08-1001. Washington, D.C.: September 9, 2008.
Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008.
Information Security: Federal Agency Efforts to Encrypt Sensitive Information Are Under Way, but Work Remains. GAO-08-525. Washington, D.C.: June 27, 2008.
Information Security: FDIC Sustains Progress but Needs to Improve Configuration Management of Key Financial Systems. GAO-08-564. Washington, D.C.: May 30, 2008.
Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks. GAO-08-526. Washington, D.C.: May 21, 2008.
Information Security: TVA Needs to Enhance Security of Critical Infrastructure Control Systems and Networks. GAO-08-775T. Washington, D.C.: May 21, 2008.
Information Security: Progress Reported, but Weaknesses at Federal Agencies Persist. GAO-08-571T. Washington, D.C.: March 12, 2008.
Information Security: Securities and Exchange Commission Needs to Continue to Improve Its Program. GAO-08-280. Washington, D.C.: February 29, 2008.
Information Security: Although Progress Reported, Federal Agencies Need to Resolve Significant Deficiencies. GAO-08-496T. Washington, D.C.: February 14, 2008.
Information Security: Protecting Personally Identifiable Information. GAO-08-343. Washington, D.C.: January 25, 2008.
Information Security: IRS Needs to Address Pervasive Weaknesses. GAO-08-211. Washington, D.C.: January 8, 2008.
Veterans Affairs: Sustained Management Commitment and Oversight Are Essential to Completing Information Technology Realignment and Strengthening Information Security. GAO-07-1264T. Washington, D.C.: September 26, 2007.
Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems Are Under Way, but Challenges Remain. GAO-07-1036. Washington, D.C.: September 10, 2007.
Information Security: Sustained Management Commitment and Oversight Are Vital to Resolving Long-standing Weaknesses at the Department of Veterans Affairs. GAO-07-1019. Washington, D.C.: September 7, 2007.
Information Security: Selected Departments Need to Address Challenges in Implementing Statutory Requirements. GAO-07-528. Washington, D.C.: August 31, 2007.
Information Security: Despite Reported Progress, Federal Agencies Need to Address Persistent Weaknesses. GAO-07-837. Washington, D.C.: July 27, 2007.
Information Security: Homeland Security Needs to Immediately Address Significant Weaknesses in Systems Supporting the US-VISIT Program. GAO-07-870. Washington, D.C.: July 13, 2007.
Information Security: Homeland Security Needs to Enhance Effectiveness of Its Program. GAO-07-1003T. Washington, D.C.: June 20, 2007.
Information Security: Agencies Report Progress, but Sensitive Data Remain at Risk. GAO-07-935T. Washington, D.C.: June 7, 2007.
Information Security: Federal Deposit Insurance Corporation Needs to Sustain Progress Improving Its Program. GAO-07-351. Washington, D.C.: May 18, 2007.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Pervasive and sustained cyber attacks continue to pose a potentially devastating threat to the systems and operations of the federal government. In recent months, federal officials have cited the continued efforts of foreign nations and criminals to target government and private sector networks; terrorist groups have expressed a desire to use cyber attacks to target the United States; and press accounts have reported attacks on the Web sites of government agencies. The ever-increasing dependence of federal agencies on computerized systems to carry out essential, everyday operations can make them vulnerable to an array of cyber-based risks. Thus it is increasingly important for the federal government to have effective information security controls in place to safeguard its systems and the information they contain. GAO was asked to provide a statement describing (1) cyber threats to federal information systems and cyber-based critical infrastructures, (2) control deficiencies at federal agencies that make these systems and infrastructures vulnerable to cyber threats, and (3) opportunities that exist for improving federal cybersecurity. In preparing this statement, GAO relied on its previously published work in this area. Cyber-based threats to federal systems and critical infrastructure are evolving and growing. These threats can be unintentional or intentional, targeted or non-targeted, and can come from a variety of sources, including criminals, terrorists, and adversarial foreign nations, as well as hackers and disgruntled employees. These potential attackers have a variety of techniques at their disposal, which can vastly enhance the reach and impact of their actions. For example, cyber attackers do not need to be physically close to their targets, their attacks can easily cross state and national borders, and cyber attackers can more easily preserve their anonymity. 
Further, the growing interconnectivity between information systems, the Internet, and other infrastructure presents increasing opportunities for such attacks. In addition, reports of security incidents from federal agencies are on the rise, increasing by over 200 percent from fiscal year 2006 to fiscal year 2008. Compounding the growing number and kinds of threats, GAO—along with agencies and their inspectors general—has identified significant weaknesses in the security controls on federal information systems, resulting in pervasive vulnerabilities. These include deficiencies in the security of financial systems and information and vulnerabilities in other critical federal information systems. GAO has identified weaknesses in all major categories of information security controls at federal agencies. For example, in fiscal year 2008, weaknesses were reported in such controls at 23 of 24 major agencies. Specifically, agencies did not consistently authenticate users to prevent unauthorized access to systems; apply encryption to protect sensitive data; or log, audit, and monitor security-relevant events, among other actions. An underlying cause of these weaknesses is agencies' failure to fully or effectively implement information security programs, which entails assessing and managing risk, developing and implementing security policies and procedures, promoting security awareness and training, monitoring the adequacy of security controls, and implementing appropriate remedial actions. Multiple opportunities exist to enhance cybersecurity. In light of weaknesses in agencies' information security controls, GAO and inspectors general have made hundreds of recommendations to improve security, many of which agencies are implementing. In addition, the White House and the Office of Management and Budget, collaborating with other agencies, have launched several initiatives aimed at improving aspects of federal cybersecurity.
The Department of Homeland Security, which plays a key role in coordinating cybersecurity activities, also needs to fulfill its responsibilities, such as developing capabilities for protecting cyber-reliant critical infrastructures and implementing lessons learned from a major cyber simulation exercise. Finally, a panel of experts convened by GAO made several recommendations for improving the nation's cybersecurity strategy. Realizing these opportunities for improvement can help ensure that the federal government's systems, information, and critical cyber-reliant infrastructure are effectively protected.
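The control categories identified in the findings above (authenticating users, encrypting sensitive data, and logging security-relevant events) are organizational controls, but two of them can be sketched in a few lines of code. The following is purely an illustrative sketch, not any agency's implementation; it uses Python's standard library to show salted password verification and an audit log of login outcomes (encryption is omitted because the standard library has no symmetric-cipher primitive).

```python
import hashlib
import hmac
import logging
import os

# Audit logging of security-relevant events: every authentication
# attempt, success or failure, is recorded with a timestamp.
logging.basicConfig(level=logging.INFO, format="%(asctime)s AUDIT %(message)s")
audit = logging.getLogger("audit")


def hash_password(password, salt=None):
    """Derive a salted PBKDF2 digest; store (salt, digest), never the password."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest


def authenticate(user, password, salt, stored_digest):
    """Recompute the digest and compare in constant time; log the outcome."""
    _, digest = hash_password(password, salt)
    ok = hmac.compare_digest(digest, stored_digest)
    audit.info("login user=%s result=%s", user, "success" if ok else "failure")
    return ok


# Enrollment, then one successful and one failed authentication attempt.
salt, stored = hash_password("correct horse battery staple")
assert authenticate("analyst1", "correct horse battery staple", salt, stored)
assert not authenticate("analyst1", "wrong password", salt, stored)
```

The design choice illustrated is the one GAO's findings imply was missing in practice: credentials are never stored or compared in plaintext, and failures are logged so they can be audited and monitored later.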
Almost 4 years after Hurricane Katrina, children living in the greater New Orleans area may be at particular risk for needing mental health services, but certain barriers may impede the delivery of such care. Since Hurricane Katrina, there has been increasing emphasis on providing community-based, rather than hospital-based, mental health services for low-income and uninsured children in the greater New Orleans area. Multiple federal agencies support the provision of mental health and related services for these children through various programs. Children in the greater New Orleans area may be at particular risk for needing mental health services. Researchers at LSU Health Sciences Center have conducted semiannual mental health screenings in selected schools in the greater New Orleans area since Hurricane Katrina. One of the lead LSU Health Sciences Center researchers told us that they had screened about 12,000 area children as of January 2008; of those screened in January 2008, 30 percent met the threshold for a possible mental health referral. Although this was a decrease from the 49 percent level during the 2005-06 school year screening, the rate of decline was slower than experts had expected. The lead LSU Health Sciences Center researcher we spoke with interpreted this slower-than-expected decline as indicating that the mental health needs of children in the greater New Orleans area continue to be significant. The effects of a traumatic event can persist for years. For example, a 2006 study on the use of counseling services by people affected by the 2001 World Trade Center attack found that some people first sought counseling services more than 2 years after the event. Research has shown that children who grow up in poverty, as well as those who are exposed to violence during or after a catastrophic disaster, are at risk for the development of mental health disorders.
In 2007 the poverty rate for each of the four parishes in the greater New Orleans area was higher than the national average, and in Orleans and St. Bernard parishes, the rate was at least twice the national average. People who have experienced or witnessed certain incidents, including serious physical injury, during or after a catastrophic disaster can face an array of psychological consequences. The LSU Health Sciences Center lead researcher we spoke with told us that January 2008 data showed that 16 to 21 percent of children screened had a family member who had been injured in Hurricane Katrina, and 13 to 18 percent of children screened had a family member who had been killed in the hurricane. The President’s 2003 New Freedom Commission on Mental Health determined that many barriers can impede delivery of services for people with mental illness. The commission specifically identified stigma, cost, not knowing where or how to obtain services, unavailable services, workforce shortages, and a fragmented mental health delivery system as barriers. The stigma surrounding mental illness—negative attitudes and beliefs about mental illness that can deter people from seeking treatment—was described as a pervasive barrier preventing Americans from understanding the importance of mental health. The commission also noted that there was a national shortage of mental health providers and a lack of providers trained in evidence-based practices. The commission recommended early intervention, education, and screening in low-stigma settings—such as primary care and school settings—as ways to prevent mental health problems in children from worsening. Before Hurricane Katrina, health care services for low-income and uninsured children and families in the greater New Orleans area were primarily hospital-based. These individuals had access to mental health services through Charity and University hospitals, which were a major source of psychiatric care for the area. 
About half of the patients served by these hospitals were uninsured, and about one-third were covered by Medicaid. Since Hurricane Katrina and the subsequent reduction in hospital capacity, according to state and local officials, there has been an increasing emphasis on providing community-based mental health services, including through school-based health centers (SBHC) and other programs that provide mental health services in schools. In general, SBHCs are located in schools or on school grounds and provide a comprehensive range of primary care services to children. Louisiana’s SBHCs provide mental health services in addition to other primary care services. The LDHH Office of Public Health operates the Adolescent School Health Initiative, which facilitates the establishment of SBHCs in Louisiana, establishes standards for SBHCs, and monitors their quality of care. Each SBHC is administered by a sponsor organization, such as a hospital or school, and is required to have a mental health provider on staff. A parent or guardian must sign a written consent form for a student to receive services at an SBHC. Some children can gain access to mental health services through the regional human services districts, to which LDHH’s Office of Mental Health, Office for Addictive Disorders, and Office for Citizens with Developmental Disabilities give funding to provide services in certain areas of the state. The regional human services districts operate and manage community-based programs and services, including mental health services. In the greater New Orleans area, the Jefferson Parish Human Services Authority serves Jefferson Parish, and the Metropolitan Human Services District serves Orleans, Plaquemines, and St. Bernard parishes. Multiple federal agencies support the provision of mental health and related services for children in the greater New Orleans area through various programs, including grant programs. (See app. 
II for information on selected federal programs that support mental health and related services for children. See app. III for information on selected services provided to children by these programs.) HHS supports the provision of mental health services for children in the greater New Orleans area through several of its agencies, including SAMHSA, HRSA, CMS, and ACF. SAMHSA, which has the primary federal responsibility for children’s mental health services, works to improve the availability of effective mental health services, substance abuse prevention and treatment services, and related services through formula grant programs—such as the Community Mental Health Services Block Grant— and discretionary grant programs—such as the National Child Traumatic Stress Initiative and the Child Mental Health Initiative. HRSA works to improve health care systems and access to health care for uninsured and medically vulnerable populations. Its Health Center Program supports health centers in the greater New Orleans area that provide primary care services, including mental health services, to adults and children. In addition, HRSA supports the provision of mental health services to children through formula and discretionary grant programs, such as the Maternal and Child Health Services Block Grant and the Bureau of Clinician Recruitment and Service’s National Health Service Corps Scholarship Program and Loan Repayment Program. CMS provides funding for health care coverage for its programs’ beneficiaries and administers certain additional grant programs related to Hurricane Katrina. CMS administers Medicaid and the State Children’s Health Insurance Program (CHIP), and the programs are jointly financed by the federal government and the states. Medicaid and CHIP represent a significant federal funding source for health services, including mental health services, for children in Louisiana. 
For example, in state fiscal year 2008, the Louisiana Medicaid and LaCHIP programs reimbursed almost $9.4 million to providers for over 66,000 claims for mental health services for children in the greater New Orleans area. Over 110,000 children in the greater New Orleans area were enrolled in these two programs as of August 2008. The programs cover inpatient psychiatric services, psychological and behavioral services provided by licensed psychologists, physician psychiatric services, and services of licensed clinical social workers when provided in certain settings. CMS also administers additional grant programs related to Hurricane Katrina, including the Primary Care Access and Stabilization Grant (PCASG), a program intended to assist in the restoration and expansion of outpatient primary care services, including mental health services, in the greater New Orleans area; the Professional Workforce Supply Grant, intended to address shortages in the professional health care workforce; and the Provider Stabilization Grants, a program intended to assist health care facilities that participate in Medicare to recruit and retain staff. ACF administers programs that promote the economic and social well-being of children, families, and communities. It supports counseling and treatment services, education, prevention initiatives, and ancillary services such as transportation through programs such as the Child Care and Development Fund and the Head Start program. In addition, in 2006 ACF distributed emergency supplemental Social Services Block Grant (SSBG) funding to Louisiana that in part supported mental health services. In addition to the HHS agencies, other federal agencies also support the provision of mental health and related services to children in the greater New Orleans area.
Education supports mental health services for children through school violence prevention and substance abuse prevention programs, such as the Safe and Drug-Free Schools and Communities State Education Agency and Governors’ Grants. DOJ supports mental health services for children who have been victims of crime through its Crime Victim Assistance program. Some programs are the shared responsibility of multiple agencies. The Department of Homeland Security’s Federal Emergency Management Agency (FEMA) and SAMHSA are partners in administering the Crisis Counseling Assistance and Training Program (CCP), which provides crisis counseling services after events for which a presidential disaster declaration has been made. The CCP provided funding to LDHH’s Office of Mental Health, the state CCP grantee, for crisis counseling services in the greater New Orleans area after Hurricanes Katrina and Rita. FEMA also supported case management services for victims of Hurricanes Katrina and Rita through the Disaster Housing Assistance Program, which is administered by HUD. In addition to federal programs, state funding and donations also support mental health and related services to children in the greater New Orleans area. For example, a grant from the W.K. Kellogg Foundation is helping to support SBHCs in New Orleans. Louisiana must provide matching funds as a requirement of its receipt of some federal grants, so federal funding may represent only a portion of the total funding. For example, both HRSA’s Maternal and Child Health Services Block Grant and SAMHSA’s Child Mental Health Initiative require the state to match federal grant funds. Stakeholder organizations that participated in our structured interviews and responded to our DCI most frequently identified lack of mental health providers and sustainability of funding as barriers to providing mental health services to children in the greater New Orleans area. 
These organizations most frequently identified a lack of transportation, competing family priorities, and concern regarding stigma as barriers to families’ obtaining mental health services for children. A lack of mental health providers in the greater New Orleans area was the most frequently identified barrier to providing services to children among the stakeholder organizations that participated in our structured interviews. (See table 1.) Fifteen of the 18 organizations identified a lack of mental health providers—including challenges recruiting and retaining child psychiatrists, psychologists, and nurses—as a barrier. Several organizations specifically described challenges in recruiting and retaining staff with particular training, such as in evidence-based practices or treatment of children and adolescents. One organization said that while a nationwide shortage of trained mental health providers contributed to recruitment difficulties before Hurricane Katrina, the hurricane exacerbated the situation because many providers left the greater New Orleans area. In their responses to the DCI, 14 of the 15 organizations reported that recruitment was more challenging now than before Hurricane Katrina, and 12 of the 15 reported that retention was more challenging now than before Hurricane Katrina. Other developments underscore the lack of mental health providers as a barrier. For example, HRSA designated the parishes in the greater New Orleans area as health professional shortage areas (HPSA) for mental health in late 2005 and early 2006; before Hurricane Katrina, none of the parishes had this designation for mental health. HRSA’s ARF data also indicate that the greater New Orleans area has experienced more of a decrease in mental health providers than some other parts of the country. 
For example, the ARF data documented a 21 percent decrease in the number of psychiatrists in the greater New Orleans area from 2004 to 2006, during which time there was a 1 percent decrease in Wayne County, Michigan (which includes Detroit and which had pre-Katrina poverty and demographic characteristics similar to those of the greater New Orleans area) and a 3 percent increase in counties nationwide. Furthermore, LDHH data showed a 25 percent decrease in the number of mental health providers in the greater New Orleans area—including psychiatrists and licensed clinical social workers—who participated in Medicaid and LaCHIP from state fiscal year 2004 to state fiscal year 2008. Sustainability of funding—including difficulty securing reliable funding sources and limitations on reimbursement for services—was the second most frequently identified barrier to providing services for children. Thirteen of the 18 organizations identified sustainability of funding as a barrier. One organization stated that there was a need to secure sustainable funding from public and private sources because individuals and organizations that had provided funding before Hurricane Katrina were no longer donating because they were leaving the greater New Orleans area. Two organizations said that the ability to obtain reimbursement for mental health services provided outside of traditional clinic settings, such as in schools, would allow some of these services to be sustained over the long term. Organizations that participated in the structured interviews identified several additional barriers to providing services for children. Availability of referral services—including the limited availability of space at inpatient psychiatric hospitals and other types of treatment facilities—was identified as a barrier by five organizations. 
One organization noted that in order to place children in residential treatment for mental illness, it had to compete for beds in Shreveport—located 5 hours outside the greater New Orleans area—or potentially send children out of state. In either case, regular family involvement in treatment, which experts say is important for treatment success, would be limited. Three organizations identified a lack of coordination between mental health providers or other providers serving children as a barrier. A 2006 review of the mental health system in Louisiana found that children with mental health problems could receive services through multiple systems—such as primary health care, schools, and social services—and that the lack of coordination and communication among these systems could result in providers not providing services to children who need them or providing duplicated services. Finally, two organizations identified availability of physical space in which to house programs as a barrier. One organization said that more than 3 years after Hurricane Katrina, providers still had difficulty locating physical space. A lack of transportation in the greater New Orleans area was the most frequently identified barrier to obtaining mental health services for children among the stakeholder organizations that participated in our structured interviews. (See table 2.) Twelve of the 18 organizations identified a lack of transportation as a barrier. For example, 1 organization told us that it was difficult for children and families to travel to clinics to obtain services because the bus system was not running at full capacity and high gas prices in 2008 made travel by car more expensive. Another organization mentioned that more families had cars before Hurricane Katrina, but many of these vehicles were destroyed in the flooding. Furthermore, in their DCI responses, 10 of the 12 organizations reported that transportation was more challenging now than before Hurricane Katrina. 
Competing family priorities—including dealing with housing problems, unemployment, and financial concerns—was tied as the second most frequently identified barrier to obtaining services for children. Competing family priorities was identified as a barrier by 11 of the 18 organizations, and in their DCI responses, 10 of the 11 organizations reported that family stress was more challenging now than before Hurricane Katrina. One organization told us that families were focused on issues such as rebuilding their homes and reestablishing their lives and that mental health concerns were seen as a low priority. The organization added that in the greater New Orleans area the cost of living, such as for rent and food, had risen. For example, the average fair market rent in the New Orleans Metropolitan Statistical Area for a two-bedroom unit rose about 40 percent—from $676 to $949 per month—from fiscal year 2005 to fiscal year 2009, exceeding the estimated affordable monthly rent for a resident earning the average income of about $37,000 a year. Concern regarding the stigma that is associated with receiving mental health services was the other barrier to obtaining services for children that was identified second most frequently—by 11 organizations. One organization said that a perception existed that a parent, by seeking out mental health services for his or her child, was labeling that child as “crazy.” In their DCI responses, 7 of the 11 organizations reported that concern regarding stigma was as challenging now as it was before Hurricane Katrina. Several organizations, however, told us that although individuals may continue to have concern about stigma if their own child is identified as needing mental health services, they have also observed more acceptance of the idea of mental health services in general. Organizations identified several additional barriers to obtaining children’s mental health services in the greater New Orleans area. 
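The rent figures cited above can be checked with simple arithmetic. The sketch below is illustrative only; the 30-percent-of-income affordability standard used here is a common housing-policy convention and is our assumption, not a figure stated in the report.

```python
# Verify the cited rent increase and compare market rent with an
# affordable-rent estimate (assuming the common 30-percent-of-income
# affordability standard; this threshold is an assumption, not a
# figure stated in the report).
old_rent, new_rent = 676, 949          # fair market rent, FY2005 vs. FY2009
increase = (new_rent - old_rent) / old_rent
print(f"rent increase: {increase:.0%}")

annual_income = 37_000                 # average income cited in the report
affordable_monthly = annual_income * 0.30 / 12
print(f"affordable monthly rent: about ${affordable_monthly:.0f}")
print(new_rent > affordable_monthly)   # market rent exceeds affordable rent
```

The result (about a 40 percent increase, against an affordable rent of roughly $925 a month) is consistent with the statement that market rent exceeded what a resident earning the average income could afford.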
A lack of service availability—including the availability of translation services and the ability to easily obtain an appointment—was identified as a barrier by eight organizations. For example, one organization told us that one parish’s high schools had students from up to 50 different ethnic groups, including a larger number of non-English-speaking students than before Hurricane Katrina. Although the children were learning English, the teachers and administrators were challenged in trying to communicate with the parents and to preserve confidentiality when using an interpreter. In addition, five of the eight organizations reported in their DCI responses that the availability of translation and interpretation services was more challenging now than before Hurricane Katrina. Three organizations identified not knowing where to go to obtain services as a barrier. For example, one organization said that before the hurricane many people knew mental health services were available at Charity Hospital, but that following its closing fewer people were aware of alternate locations offering such services. All three organizations reported in their DCI responses that not knowing where to go for services was a more challenging barrier now than before Hurricane Katrina. Finally, the lack of health insurance was identified as a barrier by two organizations. One organization said that many parents were overwhelmed by the process of signing up their children for LaCHIP, especially because living in multiple states complicated the process. A range of federal programs address the most frequently identified barriers to providing and obtaining mental health services for children, but much of the funding for these programs is temporary. Since Hurricane Katrina, SBHCs have emerged as a key approach to addressing barriers to obtaining services, and state agencies have used federal funding to support these clinics. 
We found that the federal programs in our review provided funding that addresses four of the five most frequently identified barriers but that much of it was temporary and did not fully address the remaining barrier in this group, sustainability of funding. (See app. II and app. III, respectively, for additional information on the federal programs in our review and selected services supported by these programs.) Lack of mental health providers. After Hurricane Katrina, the greater New Orleans area received funding from CMS and HRSA programs to address a general lack of providers, including children’s mental health providers. For example, as of May 2008, CMS’s Professional Workforce Supply Grant, created with the intent to recruit and retain health professionals in the greater New Orleans area, was used to provide financial incentives to 82 mental health providers who agreed to either take a new position or continue in a position in the greater New Orleans area and to serve for at least 3 years. This funding will be available through September 2009. About two-thirds of the provider organizations receiving PCASG funds told us they used some of the funding to hire mental health providers; these funds will be available through September 2010. In addition, through CMS’s Provider Stabilization Grants, awarded to Louisiana to help health care facilities hire and retain providers, $52,001 was provided in June 2007 to community mental health centers in Orleans Parish that serve children. As of October 2008, HRSA’s Bureau of Clinician Recruitment and Service, which provides student loan repayment and scholarships to providers serving in designated HPSAs, was supporting 7 mental health professionals in the greater New Orleans area—4 social workers, 2 psychologists, and 1 child psychiatrist. 
A few federal programs support training of children’s mental health providers, helping to address the lack of providers trained in children’s mental health that was identified as a barrier in our structured interviews. SAMHSA’s National Child Traumatic Stress Initiative awarded two grants in October 2008 to providers in the greater New Orleans area to provide training on, implement, and evaluate trauma-focused treatment for children. For example, providers in the greater New Orleans area were trained on various trauma-related interventions, which included evidence-based practices that are delivered in schools. In addition, the Children’s Health Fund Community Support and Resiliency Program, whose funding from SAMHSA expires in September 2009, provides comprehensive training and technical assistance on the assessment and treatment of trauma in children for medical, mental health, education, and child care professionals in the greater New Orleans area. Lack of transportation. Although none of the federal programs included in our review are designed solely to provide transportation for children obtaining mental health services, officials we interviewed told us that funding from several federal programs has been used in that way. For example, Louisiana designated $150,000 in the fiscal year 2009 Community Mental Health Services Block Grant state plan for transportation for children in the greater New Orleans area, and funding from ACF’s 2006 SSBG supplemental grant and SAMHSA’s Child Mental Health Initiative has also been used to supply transportation to mental health appointments for children. Louisiana Medicaid officials told us that the Louisiana Medicaid program provides reimbursement for nonemergency, previously authorized transportation for enrolled children for any Medicaid-covered service and for medical emergencies, including transportation to inpatient mental health facilities.
Louisiana Medicaid also provides reimbursement to family or friends who provide medically necessary transportation for Medicaid enrollees and provides reimbursement for home- or community-based treatment, which can reduce the need for transportation to provider offices. SAMHSA’s National Child Traumatic Stress Initiative has two grantees in the greater New Orleans area that provide trauma-focused mental health services to children in schools, which can also reduce the need for transportation to provider offices. For example, an official from one grantee told us that it had provided mental health services to children who live in the more rural sections of the greater New Orleans area, for whom travel time to services could be a significant barrier to obtaining care. Competing family priorities. Federal programs provide funding that is used to alleviate conditions that create competing family priorities—including dealing with housing problems, unemployment, and financial concerns—to help families more easily obtain children’s mental health services. Federal programs address competing priorities, in part, by providing case management, information, and referral services, which can help families identify and obtain services such as health care, housing assistance, and employment assistance. For example, the 2006 SSBG supplemental funding supported over 25,000 case management services to children in Louisiana from July 2006 through September 2008. In addition, officials from a local organization that received funding from ACF’s Head Start told us that the program had provided families with information and referrals for mental health services. HUD’s and FEMA’s Disaster Housing Assistance Program provided case management services, which included social services such as job training and referrals for mental health services, in addition to rental assistance to certain families displaced by Hurricanes Katrina and Rita.
The program ended on March 1, 2009, but program clients in Louisiana will continue to receive services through a transitional program through August 31, 2009. Federal programs also address competing family priorities by providing direct financial assistance, which may help alleviate family stress and make it easier for families to devote resources and effort to obtaining mental health services for their children. For example, the Metropolitan Human Services District uses federal funding from the Community Mental Health Services Block Grant to give financial assistance for utilities, rent, and school uniforms to families of children who have certain mental health disorders, or to provide family stabilization services to help keep these children in their homes. In addition, the Louisiana state program that uses the SAMHSA Child Mental Health Initiative grant provides time-limited funding for tutoring, school uniforms, and other expenses when they are a part of an individualized service plan for children with diagnosed mental health disorders. Concern regarding stigma. An official from one of the National Child Traumatic Stress Initiative grantees in the greater New Orleans area told us that because the school systems it has worked with have integrated the delivery of mental health services into the schools, the stigma associated with mental health services has decreased. In addition, some federal programs support the provision of education services, which the President’s New Freedom Commission on Mental Health reported can reduce stigma associated with mental health services. For example, in 2008 FEMA’s and SAMHSA’s CCP provided information about counseling services through a media campaign that included billboards, television commercials, and print and radio advertisements.
SAMHSA’s State/Tribal Youth Suicide Prevention Grants provided suicide prevention and education services through a 2007 media campaign that included busboards, radio public service announcements, and print advertisements throughout the greater New Orleans area. Sustainability of funding. Although most of the federal programs we identified were not established as a direct result of Hurricane Katrina, the programs that are hurricane-related have been an important source of support for mental health services for children in the greater New Orleans area. However, much of this funding is temporary. For example, three hurricane-related grant programs—CMS’s PCASG and Professional Workforce Supply Grant and ACF’s 2006 SSBG supplemental funding— will no longer be available to grantees after 2010. Although the PCASG was created with the expectation that providing short-term financial relief would significantly increase the likelihood of the PCASG fund recipients’ sustainability, and PCASG recipients were required to prepare sustainability strategies as part of their application, it is too early to know whether these organizations will achieve sustainability. Since Hurricane Katrina, the number of SBHCs in the greater New Orleans area has increased. At the start of the 2005-06 school year, there were seven SBHCs providing mental health and other primary care services to children in the greater New Orleans area. Most of these SBHCs were closed as a result of damage from Hurricanes Katrina and Rita, and the ones that remained open had also sustained damage. During the 2007-08 school year, there were nine SBHCs in the greater New Orleans area, and state officials told us in February 2009 that at least four more SBHCs were in the planning stages for this area. Louisiana’s SBHCs receive their funding from several sources. The LDHH Office of Public Health, which oversees SBHCs in the state, provides some state funding. 
There is no federal program whose specific purpose is to support SBHCs, but LDHH and local providers have used funding from various federal sources to support SBHCs. For example, a state official told us that the Office of Public Health has used a small portion of LDHH’s annual Maternal and Child Health Services Block Grant from HRSA to support SBHCs. Some organizations that support SBHCs in the greater New Orleans area have also received temporary funding, such as from the PCASG and the hurricane-related SSBG supplemental funding. In addition, the Jefferson Parish Human Services Authority, which provides mental health services at SBHCs, has received funding allocated by LDHH’s Office of Mental Health from SAMHSA’s Community Mental Health Services Block Grant. Furthermore, providers at some SBHCs told us they could receive Medicaid reimbursement for some mental health services, including those related to psychiatric care. State officials told us that although CMS permitted the reimbursement of social work services provided at SBHCs, the Louisiana Medicaid program had not provided reimbursement for social work services because of state funding constraints. Some SBHCs may also obtain funding from nonprofit organizations. For example, grant funding from the W.K. Kellogg Foundation was significant in the rebuilding and expansion of SBHCs after Hurricane Katrina. Because Louisiana requires SBHCs to have mental health staff on-site, SBHCs can be an access point for children who need mental health services in the greater New Orleans area. Furthermore, some SBHCs in the area have a psychiatrist on staff on a part-time basis. During the 2007-08 school year, the need for mental health services was the primary reason for almost one-quarter of students’ visits to SBHCs in the greater New Orleans area. In addition, SBHC health care providers told us that students who visited the SBHCs for other reasons may have also received mental health services. 
SBHCs in the greater New Orleans area have emerged as a key approach to addressing the top three barriers to obtaining services identified in our structured interviews—a lack of transportation, competing family priorities, and concern regarding stigma. SBHCs are generally located in schools or on school grounds, which reduces students’ need for transportation to obtain care. The SBHCs in Jefferson Parish serve students on multiple school campuses, and students in schools not colocated with an SBHC can be transported when necessary. SBHC services may be provided at low or no cost to the patient, which lessens the financial burden on the family. The location of SBHCs in schools or on school grounds also reduces the need for a parent to take time off from work to accompany a child to appointments. In addition, colocation of mental health and other primary care services may reduce concern regarding stigma because the type of service the child is receiving at the SBHC is generally not apparent to an observer. One SBHC provider told us that offering mental health services in the same location as other primary care services “demystifies” mental health services and eliminates the perception that they are separate from primary care services. Officials at SBHCs told us they were working to obtain additional funding to help achieve long-term sustainability of the clinics. Officials from the Metropolitan Human Services District told us that it would not be possible for every school to have an SBHC, but that they were working on an initiative with other local organizations and school districts to develop a “hub” system to deliver health care services, including mental health services, to children in the greater New Orleans area. 
Under the planned pilot program, individual SBHCs or other community clinics would become hub clinics that would serve 10 feeder schools, 6 of which would be served by 2 mental health providers funded by the Metropolitan Human Services District, and 4 of which would be served by mental health providers funded by other organizations. Children needing services beyond those provided by their school mental health provider or nurse could be referred to the hub clinic. Officials planned to begin hiring school nurses and mental health providers for the feeder schools by July 2009. We provided a draft of this report to HHS and Education for their review. HHS provided comments on two key issues. HHS’s comments are reprinted in appendix IV and discussed below. In addition, both HHS and Education provided technical comments. We incorporated HHS and Education comments as appropriate. In its comments, HHS stated that our draft report focused too heavily on SBHCs, to the exclusion of other models of care. HHS noted that the school systems in the greater New Orleans area have been very receptive to the direct provision of mental health services in schools, because of the psychological difficulties experienced by school children due to distress related to Hurricane Katrina. HHS supplied additional information on SAMHSA’s National Child Traumatic Stress Initiative’s two grantees in the greater New Orleans area, which provide mental health services in schools. We highlighted SBHCs in our draft report because they have emerged as a key approach to serving children in the greater New Orleans area, due in part to the state’s use of federal funds to support this model of care. Our discussion of SBHCs in the greater New Orleans area is not intended to imply that they are the only model for providing school-based mental health services to children, and we have added additional information to our report on the National Child Traumatic Stress Initiative grantees. 
HHS also commented that many SBHCs do not provide mental health services, and that those that do provide them may not have staff who can provide more intensive services. However, as our draft indicated, all SBHCs in Louisiana are required to have a mental health provider on staff and therefore can be a valuable resource for children seeking mental health services. We have also added information to the report indicating that some SBHCs in the greater New Orleans area have a psychiatrist on staff on a part-time basis. HHS commented that our draft report minimized housing problems faced by children and families in the greater New Orleans area in our discussion of barriers to obtaining mental health services; HHS also stated that the lack of stable housing in the area is one of the greatest barriers to children’s mental health recovery. We disagree that the draft report minimized the role of housing problems. Our findings were based on barriers identified by stakeholders, who described what they believed to be the greatest barriers to families obtaining mental health services for children. The draft report included information related to housing problems in greater New Orleans in our discussion of competing family priorities, which tied as the second most frequently identified barrier to obtaining mental health services for children. However, we added information to the report to emphasize that housing problems may affect children’s mental health. In its comments, HHS also provided additional information on SAMHSA’s Child Mental Health Initiative, which we have incorporated as appropriate. We also expanded our description of FEMA’s and SAMHSA’s CCP in our appendix on federal programs in response to HHS’s comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the Secretary of Health and Human Services, the Secretary of Education, and appropriate congressional committees. The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or bascettac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V. We have estimated that about 187,000 children through age 17 were living in the greater New Orleans area during 2008. To arrive at this estimate, we calculated the total enrollment for all public and private schools in the greater New Orleans area by adding the number of public school students as of fall 2008 (89,178) to the number of private school students reported for the 2008-09 school year (41,188). About 130,366 children were enrolled in public and private schools in the greater New Orleans area for the 2008-09 school year, which was 70 percent of pre-Katrina enrollment (186,530 in the 2004-05 school year). However, school enrollment data underestimate the total child population, as they do not include all children younger than school age. Therefore, we generated our estimate by adding the total enrollment data to birth data for 2004 through 2008. (See fig. 1.) Table 3 is a list of the federal programs in our review that have been used to support the provision of mental health or related services to children in the greater New Orleans area. The list includes 9 formula grant programs that support the provision of mental health services through noncompetitive awards to the state based on a predetermined formula, and 13 discretionary grant programs that support services that address at least one of the identified barriers to providing and obtaining mental health services for children.
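The enrollment arithmetic behind the child-population estimate described above can be laid out as a quick check (enrollment figures are from the report; the birth data used to reach the 187,000 total are not itemized in this section, so they are not reproduced here):

```python
# Enrollment component of the child-population estimate.
public_fall_2008 = 89_178       # public school students, fall 2008
private_2008_09 = 41_188        # private school students, 2008-09
enrollment = public_fall_2008 + private_2008_09
print(enrollment)               # 130,366 children enrolled

pre_katrina_enrollment = 186_530                     # 2004-05 school year
print(f"{enrollment / pre_katrina_enrollment:.0%}")  # about 70 percent
```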
It was not possible for us to calculate a total amount of federal funding allocated or spent to support mental health services to children in the greater New Orleans area or the total number of children served through federal programs because of a lack of comparable data among federal and state agencies and individual programs. Figure 2 is a list of selected services supported by the federal programs in our review for children in the greater New Orleans area. In addition to the contact named above, Helene F. Toiv, Assistant Director; Elan Martin; Roseanne Price; Julie L. Thomas; Laurie F. Thurber; Jennifer Whitworth; Malissa G. Winograd; and Suzanne Worth made key contributions to this report. Hurricane Katrina: Federal Grants Have Helped Health Care Organizations Provide Primary Care, but Challenges Remain. GAO-09-588. Washington, D.C.: July 13, 2009. Disaster Assistance: Greater Coordination and an Evaluation of Programs’ Outcomes Could Improve Disaster Case Management. GAO-09-561. Washington, D.C.: July 8, 2009. Disaster Assistance: Federal Efforts to Assist Group Site Residents with Employment, Services for Families with Children, and Transportation. GAO-09-81. Washington, D.C.: December 11, 2008. Catastrophic Disasters: Federal Efforts Help States Prepare for and Respond to Psychological Consequences, but FEMA’s Crisis Counseling Program Needs Improvements. GAO-08-22. Washington, D.C.: February 29, 2008. School Mental Health: Role of the Substance Abuse and Mental Health Services Administration and Factors Affecting Service Provision. GAO-08-19R. Washington, D.C.: October 5, 2007. Hurricane Katrina: Status of Hospital Inpatient and Emergency Departments in the Greater New Orleans Area. GAO-06-1003. Washington, D.C.: September 29, 2006. Hurricane Katrina: Status of the Health Care System in New Orleans and Difficult Decisions Related to Efforts to Rebuild It Approximately 6 Months after Hurricane Katrina. GAO-06-576R. Washington, D.C.: March 28, 2006. 
Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Mental Health Services: Effectiveness of Insurance Coverage and Federal Programs for Children Who Have Experienced Trauma Largely Unknown. GAO-02-813. Washington, D.C.: August 22, 2002.
The greater New Orleans area—Jefferson, Orleans, Plaquemines, and St. Bernard parishes—has yet to fully recover from the effects of Hurricane Katrina. As a result of the hurricane and its aftermath, many children experienced psychological trauma, which can have long-lasting effects. Experts have previously identified barriers to providing and obtaining mental health services for children. The Department of Health and Human Services (HHS) and other federal agencies have supported mental health services for children in greater New Orleans through various programs, including grant programs initiated in response to Hurricane Katrina. GAO was asked to study the federal role in addressing barriers to these services in greater New Orleans. In this report, GAO (1) identifies barriers to providing and to obtaining mental health services for children in greater New Orleans, and (2) describes how federal programs, including grant programs, address such barriers. To do this work, GAO used a structured interview and a written data collection instrument to gather views on barriers from 18 state and local stakeholder organizations selected on the basis of experts' referrals and the organizations' roles in children's mental health. To learn how federal programs address these barriers, GAO reviewed documents from and interviewed federal, state, and local officials involved in providing mental health services to children. GAO's work included a site visit to greater New Orleans. Stakeholder organizations most frequently identified a lack of mental health providers and sustainability of funding as barriers to providing mental health services to children in the greater New Orleans area; they most frequently identified a lack of transportation, competing family priorities, and concern regarding stigma as barriers to families' obtaining services for children.
Fifteen of the 18 organizations identified a lack of mental health providers—including challenges recruiting and retaining child psychiatrists and psychologists—as a barrier to providing services to children. Thirteen organizations identified sustainability of funding, including difficulty securing reliable funding sources, as a barrier to providing services. A lack of transportation was most frequently identified—by 12 organizations—as a barrier to families' ability to obtain services for their children. The two second most frequently identified barriers to obtaining services were competing family priorities, such as housing problems and financial concerns, and concern regarding the stigma associated with receiving mental health services. A range of federal programs, including grant programs, address some of the most frequently identified barriers to providing and obtaining mental health services for children, but much of the funding they have supplied is temporary. Several federal programs support state and local efforts to hire or train mental health providers. For example, HHS's Professional Workforce Supply Grant has resulted in recruitment and retention incentives to mental health providers in the greater New Orleans area. Several HHS programs allow funding to be used to transport children to mental health services, including Medicaid and the 2006 Social Services Block Grant (SSBG) supplemental funding provided to Louisiana. However, much of the funding, including that from the Professional Workforce Supply Grant and the supplemental SSBG, is hurricane-related and will no longer be available after 2010. School-based health centers (SBHC) have emerged as a key approach in the area to address barriers to obtaining mental health services for children, and although there is no federal program whose specific purpose is to support SBHCs, state programs have used various federal funding sources to support them.
For example, a Louisiana official told us funds from HHS's Maternal and Child Health Services Block Grant and Community Mental Health Services Block Grant support SBHCs in greater New Orleans. SBHCs address the transportation barrier because they are located on school grounds, and they help families by reducing the need for a parent to take time off from work to take a child to appointments. In addition, because SBHCs provide both mental health and other primary care services, the type of service a child receives is not apparent to an observer, which may reduce concern about stigma. In commenting on a draft of this report, HHS provided additional information on mental health services provided in schools other than through SBHCs and emphasized the effect of a lack of stable housing on children's mental health. HHS also provided technical comments. GAO incorporated HHS's comments as appropriate.
Financing elementary and secondary education requires a large amount of money; in school year 1993-94, expenditures in all U.S. elementary and secondary schools totaled an estimated $285 billion. In most of the 50 states, education is the largest single expenditure category in the state budget, accounting for 20.3 percent of total state spending in fiscal year 1994. Elementary and secondary schools receive most of their funds from state and local revenues. Federal aid has mainly focused on providing services to educationally disadvantaged children through categorical, program-specific grants. In school year 1992-93, state and local shares of education spending were almost equally divided at 45.6 percent (or $113 billion) and 47.4 percent (or $118 billion), respectively, while the federal share was 6.9 percent (or $17 billion). Disparities in the distribution of education funds can occur because of the method that states use to finance their public elementary and secondary schools. In most states, localities provide a major share of school funding, which is generally raised through the property tax (see table 1 for the funding sources in the states we analyzed). Because property wealth is not equally distributed among school districts, however, the heavy reliance on the local property tax produces disparities in districts’ ability to raise education revenues. Since the early 1970s, these disparities have led poor districts in more than 40 states to challenge the constitutionality of their state’s school finance system. More than half of the state systems have been challenged since 1989—some for the second time. In most cases, less wealthy districts charged that the state school finance system violated the state’s constitution under one or two provisions—the education clause or the equal protection clause. All states have an education clause or some provision in their constitution that requires the creation of a public school system.
These clauses vary—some simply call for the creation of an education system, and others call for such systems to be, for example, “thorough and efficient” or “general and uniform.” The courts in each state must interpret the substantive meaning of such clauses, on the basis of their wording and principles of statutory interpretation. Equal protection clauses in state constitutions require that all individuals in similar situations be treated similarly. A suit brought under this provision must allege that an individual is being classified, or treated differently, by the state. In invoking the equal protection clause, claimants may, for example, allege that the state discriminated against low-property wealth districts in providing education funds. In states where the highest court has found public education to be a fundamental state right, a standard of strict scrutiny applies. For a school finance statute to meet the standard of strict scrutiny, the court must be satisfied that a compelling government interest is at stake, a less discriminatory method to meet it does not exist, and the classification in the legislation is necessary to achieve a compelling government interest. Determining the equity of a state’s school finance system, according to school finance experts, requires policymakers to consider the following four issues. First, policymakers must decide who is to benefit from an equitable school finance system, taxpayers or public school students. Second, the object that is to be equitably distributed must be determined. These objects include educational revenues or key educational resources, such as curriculum and instruction, or outcomes, such as student achievement. Third, the principle to be used to determine whether the distribution is equitable must be chosen.
School finance experts have identified four principles for defining equity: (1) horizontal equity, in which all members of the group are considered equal; (2) vertical equity, in which legitimate differences in resource distributions among members of the group are recognized—for example, given children’s differences, some students deserve or need more educational services than others; (3) equal opportunity, also known as fiscal neutrality, which means that differences in expenditures per pupil cannot be related to local school district wealth; and (4) effectiveness, which assesses the degree to which resources are used in ways that research has shown to be effective. The effectiveness principle suggests that a resource inequity exists not only when insufficient resources are available but also when resources are not used in ways that produce desired impacts on student performance. The fourth issue, according to experts, is to determine what statistic will be used to measure the degree of equity in the school finance system. Many statistical measures exist for this purpose; one such measure is the federal range ratio. Using the effectiveness principle to define equity requires knowledge of the use of education dollars to achieve certain desired student outcomes. However, the relationship among money, quality, and student achievement is not well understood. As a result, school finance experts report that resources are not always used in ways that strengthen teaching and learning. As an expert with the Education Commission of the States recently suggested, better information is needed about what resources are necessary to create successful schools, what programs and services are valuable investments, and which ones result in the biggest payoff for students. All three states that we reviewed revised their school finance system at least partly in response to lawsuits, although the initial outcomes of the suits differed. 
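The federal range ratio mentioned above compares per pupil revenues near the top and bottom of the distribution of districts. A minimal sketch of the calculation follows; it uses a simplified, unweighted form of the measure (ranking districts rather than pupils), and the district revenue figures are hypothetical:

```python
# Simplified sketch of the federal range ratio, one statistic used to
# measure school finance equity. District figures are hypothetical.
# The measure compares per pupil revenues at the 95th and 5th
# percentiles: ratio = (revenue_at_95th - revenue_at_5th) / revenue_at_5th

def federal_range_ratio(per_pupil_revenues):
    """Unweighted federal range ratio over district per pupil revenues.
    (The official computation weights districts by pupil counts.)"""
    ranked = sorted(per_pupil_revenues)
    n = len(ranked)
    p5 = ranked[int(0.05 * (n - 1))]    # value near the 5th percentile
    p95 = ranked[int(0.95 * (n - 1))]   # value near the 95th percentile
    return (p95 - p5) / p5

# Hypothetical per pupil revenues for 21 districts, $2,000 to $5,000
revenues = [2000 + 150 * i for i in range(21)]
ratio = federal_range_ratio(revenues)
```

A ratio of zero indicates that the 5th- and 95th-percentile districts raise identical revenues per pupil; larger values indicate greater disparity.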
In each state, less wealthy districts filed suit claiming that disparities in the districts’ access to education revenues violated the state constitution. In Tennessee and Texas, the state supreme courts declared the school finance systems unconstitutional; in Minnesota, the state supreme court upheld the system because the disparities in revenue did not preclude the plaintiff districts from providing an education that met the state’s basic requirements. Nonetheless, the Minnesota State Legislature changed the system. The lawsuits in all three states focused on disparities in access to revenues. The greatest such disparity was in Texas, where the wealthiest district had $14 million in taxable property wealth per pupil, and the least wealthy district had $20,000 per pupil. Revenue disparities between the wealthiest and least wealthy districts were not necessarily due to the unwillingness of less wealthy localities to tax themselves. In Minnesota, for example, the plaintiffs’ attorney said that the plaintiffs’ analysis showed that the wealthiest 10 percent of districts in 1989 raised on average six times more revenue through a levy program than the poorest 10 percent even though tax rates for the poorest districts were on average 25 percent higher. In Tennessee and Texas, the courts found that the differences in taxable wealth among districts created disparities in spending per pupil, which in turn led to such disparities in educational opportunities that some districts could not meet the state’s basic education course requirements. For example, in Tennessee, the disparity in total current funds available per pupil in 1987 (1 year before the lawsuit) meant that some districts spent more than twice as much as others—$3,669 per pupil compared with $1,823 per pupil. 
The disparity in funding, the Tennessee court concluded, deprived students in the plaintiff schools of equal access to adequate educational opportunities such as laboratory facilities; computers; current textbooks; buildings; and music, art, and foreign language courses. Only one court of the three—the Tennessee Supreme Court—linked inadequate funding of the plaintiff schools and educational outcomes. When the Tennessee suit was filed, only 7 percent of the elementary schools and 40 percent of the secondary schools in the state’s 10 poorest districts were accredited by the Southern Association of Colleges and Schools, compared with 66 percent and 77 percent, respectively, of the 10 richest districts. Students in the plaintiff schools, the court observed, had poor standardized test results and a higher need for remedial courses in college. In Texas and Tennessee, the state supreme courts initially found the states’ school financing systems unconstitutional; subsequent legislative action led the courts eventually to uphold the revised finance systems as constitutional. Even before the Minnesota Supreme Court found the school finance system constitutional, the legislature had made further changes to equalize more of the state education revenue. (See table 2.) Regarding the two states whose systems the courts found unconstitutional, the Texas court’s judgment was more prescriptive: it called for districts to have substantially equal access to similar revenue per pupil at similar levels of tax effort. This ruling followed the court’s finding that a concentration of resources in very wealthy districts taxing at low rates should not exist while less wealthy districts taxing at high rates could not generate enough revenue to meet even minimum educational standards. 
Holding that the state’s school financing system violated a provision of the state constitution requiring educational efficiency, the court concluded that districts must have substantially equal access to similar revenue for similar levels of tax effort—regardless of property wealth. The Tennessee court found that the school finance system failed to provide substantially equal educational opportunities to all students, leaving it to the legislature to devise a remedy. The remedies developed by the legislatures used one or more of the following options: (1) added new money to the school finance system to increase funding in poorer districts (“leveled up”), (2) redistributed the available resources by modifying the school finance formulas, (3) limited the local revenues in very wealthy districts (“leveled down”), or (4) recaptured local revenue from wealthy districts and redistributed it to poor districts. Each state approached crafting its solution differently (see table 3 and apps. III, IV, and V). Of the three states in our study, only Tennessee raised new revenues to improve access to revenues (leveled up), without placing limits on local contributions to education. Tennessee also revised its funding formulas by enacting a new school finance program in 1992. The program funded local school districts on the basis of the cost of providing a basic education and determined the local share of this cost on the basis of a locality’s fiscal capacity. To finance the estimated $665 million needed to fund this program, the Tennessee State Legislature passed a half-cent increase in its state sales tax and earmarked the new revenues for education. Under this program, every district received more funding per pupil than it would have under the old system, with less wealthy districts, as measured by their fiscal capacity, receiving proportionately more than wealthy districts. 
Texas and Minnesota redistributed state funds by changing the funding criteria in state aid formulas to favor low property wealth districts—without raising new state revenues. For example, Texas agreed to provide additional state aid to those districts whose per pupil property wealth was below $205,500 and who were willing to raise their property tax rate above the minimum required—$.86 per $100 of property value—up to a maximum of $1.50 per $100 of property value. Similarly, Minnesota chose to increase its aid to districts with low property wealth by agreeing to equalize a portion of the amount that districts raised through their optional operations levy. Furthermore, both Texas and Minnesota chose to limit local contributions (leveled down). In Minnesota, one education finance committee legislator said that the limit placed on levy revenue was as much a response to tax relief demands as it was an equalizing measure. In Texas, however, where property wealth of districts varied greatly, legislators set limits to meet the court mandate of “substantially equal access to similar revenue per pupil at similar levels of tax effort.” The state could not afford to close the disparity in access to revenues by leveling up the revenue in less wealthy districts. The state, therefore, limited local contributions in two ways. First, the legislature limited the amount of revenue available to districts by “capping” the property tax rate at which localities could tax themselves. Second, the state chose to limit the taxable property value available per pupil to $280,000. Districts whose per pupil property values exceeded this threshold could choose one of five options to reduce their taxable wealth (recapture provision). Very wealthy districts generally chose one of two options to reduce their taxable wealth—writing a check to the state or to a less wealthy district. The legislative reforms had to account for budgetary and political pressures. 
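The guaranteed-yield and recapture mechanics described above can be sketched as follows. The formula is a simplification for illustration only, and the district figures are hypothetical, not actual Texas data:

```python
# Hedged sketch of a guaranteed-yield aid computation and a recapture
# test of the kind Texas adopted; the formula details are simplified
# and the district figures below are hypothetical.

GUARANTEED_WEALTH = 205_500   # per pupil property wealth guaranteed by the state ($)
MIN_RATE = 0.86               # minimum required tax rate per $100 of property value
MAX_RATE = 1.50               # capped tax rate per $100 of property value
WEALTH_CAP = 280_000          # taxable wealth per pupil above which recapture applies ($)

def guaranteed_yield_aid(wealth_per_pupil, tax_rate):
    """State aid per pupil: the state makes up the difference between what
    the district's own wealth yields and what the guaranteed wealth would
    yield, for tax effort above the minimum rate."""
    if wealth_per_pupil >= GUARANTEED_WEALTH:
        return 0.0
    effort = min(tax_rate, MAX_RATE) - MIN_RATE
    if effort <= 0:
        return 0.0
    # tax rates are expressed per $100 of property value
    return effort * (GUARANTEED_WEALTH - wealth_per_pupil) / 100

def recaptured_wealth(wealth_per_pupil):
    """Taxable wealth per pupil above the cap that a district must shed."""
    return max(0.0, wealth_per_pupil - WEALTH_CAP)

# A less wealthy district taxing at the cap, and a very wealthy district:
aid = guaranteed_yield_aid(wealth_per_pupil=100_000, tax_rate=1.50)
excess = recaptured_wealth(wealth_per_pupil=900_000)
```

Under this sketch, the less wealthy district's aid rises with its tax effort, so revenue depends on effort rather than local wealth, which is the "substantially equal access" the Texas court required.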
Budget pressures were related to competing demands of other budget sectors and to the growing student populations with special needs. The political pressures were related to concerns about the reforms’ impact on taxes, the funding levels of high-spending districts, and maintaining local control of schools. In response to such pressures, legislatures in each state included hold-harmless and win-win provisions. Legislators had to negotiate changes to state finance systems in a fiscal environment in which education and other costs were growing rapidly. On the basis of our analysis of state budget data, we found that Medicaid and corrections expenditures in all three states had grown substantially in the years before and during education finance reform. Medicaid spending had more than doubled in each state during that period. Although corrections spending had increased in all three states, Texas’ corrections spending increased most dramatically—by 135 percent from 1991 to 1995. We also found that these three states had similar education-related budget pressures from growing populations of special needs students, such as those requiring remedial and bilingual programs and those with disabilities, who are traditionally more expensive to serve. Other pressures were essentially political, reflecting the public’s antitax sentiments and concerns for local education programs and for local control. In all three states, the reform process reflected the public’s antitax sentiment, which precluded some proposed solutions. For example, in Tennessee, attempts to increase revenues for schools by implementing a first-time, broadly based income tax were defeated. Likewise, a statewide property tax in Texas was defeated. The solutions that were adopted also reflected antitax sentiments. 
To build the business community’s support for a half-cent increase in Tennessee’s sales tax to fund finance and other education reform, for example, the legislature included accountability measures ensuring that the new money would be used to purchase resources deemed important to improving education. Minnesota legislators limited the amount of per pupil revenue raised through a levy program based on property tax to be fairer to property owners as well as to limit (level down) local spending on education. Legislators had to respond to concerns from wealthy districts that sudden reductions or limitations in revenue would unfairly harm their education programs. As a consequence, Texas and Minnesota allowed hold-harmless exceptions in some wealthy districts either temporarily or permanently. These exceptions allowed districts to retain their educational revenues (Minnesota) or maintain existing spending levels by raising taxes (Texas). Similarly, a Tennessee legislator said that to build support for passage of the new school finance law, it was important to show that all districts would benefit from the new financing scheme. Local control was another major concern for legislators in two states. In Texas, consolidating tax bases among districts to facilitate sharing property tax revenues was vehemently opposed by those whose community identity was closely linked to district boundaries. Instead of requiring tax base consolidation, the legislature ultimately encouraged it as one of five options available to wealthy districts for redistributing their excess wealth. Allowing for local control of spending was also important in Tennessee, where the new school finance plan provided much more flexibility in spending compared with the old. Officials we interviewed reported that reforms to school finance systems have improved equity for less wealthy districts in terms of access to revenues, per pupil spending levels, or educational opportunities. 
Further, state education finance reports and our analyses of district spending data support their conclusion. Officials in all three states reported improved access to revenues in less wealthy districts. We analyzed Minnesota state school spending data and found the following to support this view: in 1988, a less wealthy district generated expenditures of $59.57 per pupil for every percent of tax levied, compared with $84.98 per pupil for every percent of tax levied in a wealthy district. By 1992-93, however, the per pupil expenditures for every percent tax levied were essentially equal in less wealthy and wealthy districts. In Texas, where the goal was to improve access to revenues by decoupling a district’s education revenue from its property wealth, a legislative analysis showed that tax effort rather than property wealth now explains the greatest amount of variance in a district’s ability to raise revenues. In 1989, property wealth alone explained almost 70 percent of the variation in a district’s ability to raise revenue, while a district’s tax rate explained only 13.5 percent. By 1995, property wealth explained only about 23 percent of the variation; tax effort explained almost 51 percent. Both Texas and Tennessee reported diminished disparities in per pupil spending levels across districts, meaning that spending had increased in the less wealthy districts in each of these states. For example, a Tennessee school finance report showed that the disparity in per pupil expenditures between high- and low-spending districts dropped from 84 percent before the implementation of the 1992 finance system to 74 percent 1 year after implementation. 
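The fiscal-neutrality analysis described above asks how much of the variation in districts' revenue is explained by property wealth versus tax effort. A minimal sketch of such a variance-explained calculation, using hypothetical district data and the squared Pearson correlation as the one-variable R-squared, is:

```python
# Hedged sketch of a fiscal-neutrality check: share of the variation in
# district revenue per pupil explained by property wealth versus tax
# effort. All district data below are hypothetical.

def r_squared(x, y):
    """Variance in y explained by a one-variable linear fit on x,
    computed as the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov * cov / (var_x * var_y)

# Hypothetical districts: property wealth per pupil ($), tax rate
# (per $100 of value), and revenue per pupil ($)
wealth = [50_000, 120_000, 200_000, 400_000, 900_000]
rate = [1.40, 1.30, 1.10, 0.95, 0.80]
revenue = [3_000, 3_200, 3_300, 3_600, 4_500]

r2_wealth = r_squared(wealth, revenue)   # variance explained by wealth
r2_rate = r_squared(rate, revenue)       # variance explained by tax effort
```

In this hypothetical data, wealth explains more of the revenue variation than tax effort does, the pattern Texas reported before its reforms; an equalized system would show the reverse.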
In our analysis of Tennessee school district funding data, we found that in the small, rural districts involved in the Tennessee lawsuit, average total per pupil funding increased by 31.93 percent, from $2,476 in the fiscal year before the 1992 implementation of Tennessee’s new school finance system to $3,254 by fiscal year 1994-95. The remaining school districts experienced a 23.10-percent increase in their average total funding per pupil in the same period, with the average per pupil funding increasing from $3,117 to $3,782. In Texas and Tennessee, where disparities in educational opportunities were such an issue that certain districts could not meet basic education requirements, officials we interviewed reported improvements in the learning opportunities for pupils in less wealthy districts. For example, in Tennessee, a former legislator said that small, rural districts now have art and music teachers, which they never had before, and can now offer courses that will better prepare their students for college. A 1994-95 Tennessee Department of Education budget report shows that districts spent the greatest share, about 80 percent, of the approximately $275 million in new funds made available to date on classroom-related expenses, particularly the hiring of new teachers to reduce kindergarten through eighth grade class sizes. Although officials we interviewed noted improvements in equity and other aspects of their state school finance systems, they also shared concerns about their systems’ sustaining these gains. Further, because of ongoing pressures from various interest groups, they observed that their revised systems already have been modified and are likely to face further alterations. 
These officials noted the following concerns: continued reliance on local property taxes to finance education spending may lead to a tax backlash, features in the finance formula may not allow districts with high numbers of disadvantaged students to keep pace with rising educational costs, and districts may continue to press for changes in the school finance system. Observations made by Tennessee officials were typical of those regarding antitax sentiment. In Tennessee, where localities are required to pay a prorated share toward the cost of their district’s basic education, a state school board official questioned the willingness of localities to raise their taxes to keep pace with rising expenses such as teacher salaries. One Tennessee education official said that, to avoid raising property taxes to pay for an increased local contribution, constituents may exert pressure to change the formula to reduce the local share, which may make it more difficult for the state to ensure the adequate and equitable financing of education needs in all districts. In Minnesota, state education officials reported that districts are finding it increasingly difficult to pass optional levies to help pay for school operations—even since 1991, when the state increased its equalizing aid to districts that passed the levies. Minnesota officials view this development as possibly exacerbating the disparities among districts’ access to revenues. Officials in all three states expressed concern about the ability of their state’s school finance system to meet growing educational costs. For example, Texas education officials that we interviewed expressed concern about the impact of limiting the tax rate to $1.50 per $100 of property value. As more districts reach the $1.50 ceiling, their ability to increase spending to meet increased costs will be severely limited unless the state provides additional funding or the tax ceiling is raised. 
An official with an association that monitors the equity of Texas’ school finance system said that “such a ceiling may be especially burdensome in less wealthy districts where the numbers of disadvantaged children with costly educational needs are growing rapidly.” Similarly, an official with a large urban district in Minnesota said that “the state’s cost adjustment factors are not adequate to fund the educational services needed for its disadvantaged students.” He said that his district, which is experiencing large increases in the number of disadvantaged children, “is considering suing the state to obtain increased funding.” Districts dissatisfied with the revised school finance systems have used legal or legislative means to change the systems. In a 1995 Tennessee Supreme Court case challenging the constitutionality of the state’s new school finance system, the Tennessee Small School Systems claimed that the new funding scheme was unconstitutional because equalization would occur over several years and the plan included no provision to equalize or increase teacher salaries. In its February 1995 opinion, the state supreme court upheld the plan’s constitutionality, accepting the state’s argument that complete equalization of funding can best be accomplished incrementally, but found that the plan’s failure to provide for the equalization of teachers’ salaries was a significant defect which, if not corrected, could put the entire plan at risk. The 1995 Tennessee State Legislature has since appropriated $7 million of the estimated $12 million needed to equalize increases in teacher salaries. In Texas and Minnesota, which both placed limitations on local contributions to education, officials reported that legislators face ongoing attempts by wealthy districts to change the school finance system. 
For example, according to state officials, wealthy districts in Texas, unhappy over the recapture clause in the revised school finance system, convinced the Texas State Legislature to partially offset the payments these districts must make. The 1995 Texas State Legislature modified the school finance law to allow wealthy districts paying recaptured funds directly to the state to reduce the amount owed by the lesser of 4 percent or $80 per credit purchased. A Texas education advocate said that provisions such as these subsidize wealthy districts at the expense of less wealthy districts. Because many other states face the prospect of reforming their school finance systems, we asked state officials what advice they had for other states. We found three common themes among their recommendations: (1) states should first clearly define equity goals in terms of the funding level needed to ensure adequate learning resources for all students or the funding needed to ensure a certain level of student performance, (2) states should link school finance reform to accountability, and (3) the reform process should be inclusive and encourage the participation of all groups affected by such reform. Officials in all three states suggested that states clarify the equity goals of a school finance system by defining such goals in terms of adequate learning resources or student performance standards. The general concern underlying this advice is that states need to know what they are purchasing with their education dollars. Officials suggested, for example, linking the amount of funding either to the level of resources needed to adequately meet the educational needs of students or to a certain level of student performance. Officials also urged states to use student performance standards to provide accountability for increased spending on education. 
Officials said that accountability was needed to convince parents and taxpayers that the increased spending would lead to improved student performance. In discussing and developing proposals to reform the school finance system, the officials we talked to suggested including all parties with a stake in education, such as parents, teacher unions, school officials, and business community representatives. They also suggested that it is important for members from the legislative and executive branches of government and the different political parties to collaborate in crafting a workable solution. Officials in all three states provided examples of how they involved diverse groups in revising their school finance legislation. For example, one Minnesota official said that because the property tax system should work in conjunction with the education finance system, members of the primary and secondary education finance committees needed to include members of the legislature’s tax committees in drafting proposals to change the education finance system. The experience of the three states we studied suggests that reforming school finance systems to make them more equitable is complex and difficult. Legislative solutions have had to be sensitive to taxpayers’ concerns about increased taxes and to concerns of wealthy districts that want to maintain existing spending levels. When these concerns are sufficiently considered to gain passage of finance reforms, disparities in education funding can be reduced and educational opportunities in poor school districts improved. However, such negotiated solutions are fragile, making efforts to achieve equity continuous and likely to require periodic adjustments in school finance systems as student demographics and economic conditions change. Solutions may use several options to increase the education funding in poor districts, such as generating new revenues or redistributing existing revenues. 
In states that choose to generate new revenues, it appears to be important to include accountability provisions to help convince taxpayers that the new investment in education is worthwhile. The Department of Education reviewed a draft of this report and had no comments. In addition, we provided state-specific information to state officials for verification and incorporated their technical suggestions as appropriate. We are sending copies of this report to appropriate House and Senate committees and other interested parties. Please call Eleanor L. Johnson at (202) 512-7209 if you or your staff have any questions. Major contributors to this report are listed in appendix VI. The objectives of this study were to characterize, for each state reviewed, (1) the reforms to the school finance systems and the legal, budgetary, and political pressures the state legislatures faced in making the reforms and (2) the general impact of the legislative remedies, especially in addressing disparities in educational funding. We also determined what advice state officials could provide for other states similarly reforming their school finance systems. To answer these questions, we conducted case studies of three states—Tennessee, Texas, and Minnesota. We selected these states for in-depth review because they had recently reformed their school finance systems and differed substantially from each other in (1) the approaches taken to revise their finance systems and (2) such demographic factors as poverty and student enrollment rates. For example, the three selected states collectively illustrate four different strategies for equalizing education finances—increasing revenues, redistributing revenues, limiting the contributions of localities to education, and recapturing funds from wealthy districts and redistributing them to less wealthy districts. 
With regard to demographic factors, the 1992 poverty rates in the three states ranged from 12.8 percent in Minnesota to 17.8 percent in Texas, with the national rate at 14.5 percent. The percent change projected in student enrollment from 1990 to 1993 ranged from 1.5 percent in Texas to 4.2 percent in Minnesota, with an overall projected increase for the nation at 4.3 percent. Analyzing the school finance systems in three states with such broad variation increases the likelihood that findings common to all three states would be relevant to other states trying to make their school finance systems more equitable. For each state selected, we reviewed school finance documents and analyzed data on state budgets, student demographics, and school district funding levels. We also interviewed 15 to 19 individuals in each state who represented a variety of education interests (see app. II). In conducting the case studies, we primarily relied on the opinions of the officials we interviewed and the supporting documentation they provided. To select states to study, we first asked experts in education finance to identify states that had implemented finance equity reforms. Then, for each state identified, we contacted state education officials or reviewed relevant materials on the state’s school finance system to obtain information on the state’s finance formulas, school finance legislation, revenue-raising strategies, and limitations, if any, on discretionary spending on local districts. We also considered each state’s demographic makeup, reviewing such factors as the concentration of poverty and public school enrollment and growth rates. We asked six national education finance experts to nominate states that had revised their school finance systems to address inequities in spending and had begun to implement the revised systems. 
The experts collectively nominated 18 states: California, Florida, Indiana, Kansas, Kentucky, Massachusetts, Maryland, Michigan, Minnesota, Missouri, Nebraska, New Mexico, South Carolina, Tennessee, Texas, Vermont, Washington, and Wisconsin. To further refine the selection of states, we contacted state education finance officials or reviewed relevant materials on the state’s school finance system to determine the following for each nominated state: (1) when the state passed legislation to equalize education spending, (2) the revenue sources used to finance the equalization effort, (3) the type of allocation formula used to distribute funds to the state’s public schools, (4) whether the state operated under a tax or spending limit, and (5) whether any limits were placed on local contributions to education. We also used the most recent U.S. Department of Education National Center for Education Statistics data to obtain the 1990 and 1991 state, local, and federal share of education spending in each state. Using this additional information, we first reduced the 18 states to 8 using the following criteria: To ensure that state officials would be able to recollect the circumstances surrounding the passage of the school finance reform legislation, we eliminated eight states that either had not passed such laws or had passed them before 1990: California, Florida, Maryland, New Mexico, South Carolina, Vermont, Washington, and Wisconsin. To ensure that school finance reforms had been in place long enough to allow us to study their effects, we eliminated one state (Michigan) where voters had only recently (March 1994) approved an increase in the general sales tax to fund a new school finance system passed by the legislature in December 1993. To ensure that we would study states that were not already the subject of many studies and on the advice of one of the six experts, we eliminated one state (Kentucky). 
Of the remaining eight states, we then judgmentally selected three that differed in their approaches to equalizing their school finance systems and differed substantially among certain demographic factors. We used the most recent Bureau of the Census data to obtain demographic information such as the 1992 poverty rate, recent shifts in kindergarten through 12th grade enrollment, and ethnicity. For each state selected, we reviewed school finance documents and analyzed data on state budgets, student demographics, and school district funding levels. We analyzed (1) the state’s school finance system, laws, formulas, and spending patterns; (2) equity lawsuits dealing with state school finances; (3) state budget data for state spending and for education to determine the budget pressures operating when the state was considering school finance reform legislation; (4) student demographic data to identify any relevant trends in target populations, such as special education students, whose educational costs may be generally higher than average; and (5) school district spending data to verify where possible the impact new reforms have had on reducing funding disparities. We did not attempt to determine what impact the school finance reform had on improving student performance. We interviewed 15 to 19 individuals in each state who represented a broad array of interests in elementary and secondary education. Specifically, to obtain information and opinions on the state’s effort to equalize education funding, we interviewed legislators; officials in the state education agency and the state board of education; state attorneys; state budget officials; and representatives of statewide education associations, such as teacher unions; associations for school administrators; and parent-teacher associations. We also interviewed individuals knowledgeable of both the plaintiffs’ and the states’ interests in the school finance equity lawsuits. 
See appendix II for a list showing the affiliation and position of the individuals we interviewed in each state. The interviews were open-ended. Major questions covered, but were not restricted to, the following subjects: (1) the problems placing the biggest demands on the state budget and their impact on funding for kindergarten through 12th grade education in general and, specifically, on the school funding scheme; (2) within the kindergarten to 12th grade education budget, what programs have been placing the biggest demand on the state’s education budget, and what has been these programs’ impact on public school funding; (3) their satisfaction that the current education finance system provides an adequate education for all students; (4) their satisfaction that the current finance system provides for a more equitable distribution of state education funds; (5) the legal, political, and economic constraints that challenged state policymakers in developing the funding and allocation system and the way policymakers dealt with those constraints in developing the system; (6) the intended financial outcomes of the new school finance system and the extent to which the state has succeeded in achieving these outcomes; (7) the trade-offs that resulted or are anticipated from implementing the new finance system; and (8) advice for states that are trying to revise their school finance system. The Texas Supreme Court held in 1989 that the state school financing system, which relied heavily on local property taxes, was unconstitutional. Since 1989, school finance issues in Texas have been dominated by legislative attempts to provide districts greater equity in their ability to raise revenue. The courts rejected two approaches passed by the Texas State Legislature before accepting a third, which has been in place since 1993. 
Key characteristics of the new approach are (1) a mechanism for equalizing property wealth among districts and (2) revenue limits for all districts through a cap on property tax rates. Included in the approach is transferring part of wealthy districts' property tax revenue to less wealthy districts. Districts were given a choice of five options for disposing of excess wealth. Most chose to simply write a check to the state. Finally, the level of state support to districts was linked to a formula that accounted for the districts' revenue-raising ability and tax effort. Texas officials interviewed, who included former legislators, state officials, educators, and others involved in the years of legal and political controversy and the results that followed, believed that the new system has achieved greater equity. However, they also cited several concerns that could undermine the state's efforts to achieve equity, such as taxpayer resistance to the higher property taxes that have resulted. Funding for the 3.6 million students in Texas's 1,046 school districts totaled $19.5 billion in school year 1993-94. Of this, $17.3 billion was budgeted by local school districts. The largest share of the district-budgeted revenue, 50.4 percent, came from localities. The remainder came from the state (41.6 percent) and the federal government (8 percent). Part of the remaining $2.2 billion was used for items not budgeted by local districts, such as textbook purchases and state matching contributions to the teacher retirement fund; the remainder was due to district underbudgeting of the revenue they actually received. Between 1985 and 1995, local funding increased by 117 percent, while state funding increased by 60 percent. At the state level, elementary and secondary education is the largest item in the state budget (about 26 percent in fiscal year 1994). 
The primary source of state revenue for education is the sales tax. Taxes on oil and gas production, corporation franchises, and tobacco and alcohol; lottery proceeds; interest and dividends; funds from the Available School Fund; and other state fees and taxes provide the rest of the education revenue. At the local level, virtually all of the revenue is raised through property taxes. Between 1990 and 1994, total expenditures for elementary and secondary education increased more than 34 percent. Much of this increase has been used to offset the rapid growth in the school-age population and increases in the number of special needs students. The cost of educating special needs students is generally higher than the cost of educating children without special needs. Texas has one of the largest and fastest growing school-age populations in the nation. Between 1990 and 1994, the total number of students increased from 3.3 million to 3.6 million (8.6 percent). During that period, the number of special needs students increased at an even faster pace: students participating in special education increased almost 33 percent; students in bilingual programs increased almost 36 percent; and economically disadvantaged students increased more than 22 percent. In 1984, a group of less wealthy school districts filed a suit (Edgewood v. Kirby) charging that the state's heavy reliance on property taxes to fund education resulted in expenditure differences that violated the Texas Constitution. The districts argued that the disparity in districts' property wealth limited the ability of less wealthy districts to raise adequate funds. After a trial in 1987 and appeals through the state court system, the Texas Supreme Court in 1989 ruled that the finance system violated the constitutional provision for an "efficient" system. 
The court noted that glaring disparities existed in the abilities of less wealthy school districts to raise revenues from property taxes because taxable property wealth varied greatly by district. The wealthiest district had over $14 million of property wealth per pupil while the poorest had about $20,000. Many less wealthy districts were taxing themselves at a much higher rate than wealthy districts but producing far less revenue. As a result, less wealthy districts struggled to raise the revenue needed to fund programs that met the state’s basic education requirements, while wealthy districts were able to pay for a wide array of enrichment programs. The court said, “a direct and close correlation between a district’s tax effort and the educational resources available to it” must exist. The court noted that although districts did not have to spend equal amounts per student, they must have substantially equal access to similar revenues. In response to the Texas Supreme Court decision, in June 1990, the legislature passed Senate Bill (SB) 1, a reform measure that provided more money for equalization but left intact the school finance system. Less wealthy districts appealed, and, in January 1991, the supreme court struck down SB 1, holding that the public school finance system still violated the “efficiency” provision of the Texas Constitution. The court said that, while SB 1 improved the school finance system, it still did not restructure the system to ensure that less wealthy districts had substantially equal access to revenue for similar tax effort. The court suggested the solution of either consolidating school districts or district tax bases. Everyone we interviewed said that consolidating school districts was not a workable option because of negative public reaction. 
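The court's "substantially equal access to similar revenues" standard rests on simple arithmetic: a district's property tax revenue per pupil is its tax rate (levied per $100 of value) times its taxable property wealth per pupil, divided by 100. The sketch below uses the wealth extremes cited by the court; the tax rates are hypothetical, chosen only to illustrate how a much higher tax effort can still yield far less revenue.

```python
def revenue_per_pupil(tax_rate_per_100, wealth_per_pupil):
    """Property tax revenue per pupil for a rate levied per $100 of value."""
    return tax_rate_per_100 * wealth_per_pupil / 100

# Wealth extremes cited by the court: over $14 million vs. about
# $20,000 of taxable property per pupil. The rates are hypothetical.
wealthy = revenue_per_pupil(0.50, 14_000_000)  # low rate, vast tax base
poor = revenue_per_pupil(1.50, 20_000)         # triple the rate, tiny base

print(wealthy)  # 70000.0
print(poor)     # 300.0
```

Even at three times the tax rate, the poorest district in this sketch raises less than one two-hundredth of the per pupil revenue available to the wealthiest, which is the disparity the court ordered the legislature to remedy.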
An official who was involved with writing the school funding legislation said, “many citizens see consolidation as a threat to local control and to the identity and economic viability of their community.” Rejecting the idea of consolidating school districts, the legislature created a system that would partially consolidate only the tax bases. This measure, SB 351, was signed into law in April 1991. It created 188 County Education Districts, which were countywide taxing entities encompassing several school districts with a cumulative property wealth no greater than $280,000 per pupil. These districts were to levy state-mandated property taxes and redistribute the revenues to their member districts on an equalized basis. This time the wealthy districts appealed, and, in January 1992, the Texas Supreme Court ruled that SB 351 was unconstitutional because it (1) violated the state constitution provision that prohibits a state property tax and (2) levied a school property tax without local voter approval. The supreme court gave the legislature until June 1993 to create a new school finance system. When the legislature met in regular session in 1993, it submitted for voter approval a constitutional amendment legalizing the major SB 351 provisions. In May 1993, Texas voters defeated the measure by a wide margin. Then, with time running out and still trying to design a system that would not force district consolidation, the legislature passed a new measure, SB 7, in May 1993. In January 1995, the Texas Supreme Court upheld SB 7’s constitutionality. In developing a system that would meet the test of the state supreme court, the legislature chose not to develop a solution that required a further increase in state taxes, according to state officials and education advocates. 
The legislature had been putting more money into the system to address equity issues since the early 1980s and determined that to address the court’s concerns, it would have to develop a solution that concentrated on redistributing local funding. Complicating the ability to find additional state dollars were rapid increases in expenditures for Medicaid and criminal justice programs. Between 1991 and 1995, state spending for Medicaid and criminal justice programs increased 117 and 135 percent, respectively. The measure passed by the legislature and approved by the supreme court has several key features. It (1) creates greater equality in property wealth among districts, (2) sets limits on local property tax rates, and (3) provides supplemental state funding for less wealthy districts to equalize the revenue received on their local taxes. The new mechanism in SB 7 that sets it apart from the other funding systems rejected by the court is a recapture provision that creates greater equality in property wealth among districts. This provision was enacted because other options for closing the gap in spending between the wealthy and less wealthy districts were limited, according to several people we interviewed. Politically, they said, the state could not pursue consolidation, and, financially, it could not raise the level of poor districts’ spending to that of wealthy districts. Because of the vast disparities in property wealth, the estimated cost of the latter option was four times the amount of the entire state budget. The new provision, which took effect in 1993, required districts with property wealth exceeding $280,000 per pupil to reduce their taxable wealth to no more than that amount. 
Districts had five options for doing so: (1) consolidating with another district or districts, (2) transferring property to a poor district for taxation purposes, (3) purchasing attendance credits from the state (in effect, writing a check to the state), (4) contracting with another district to educate some of their students (in effect, writing a check to the district), or (5) creating a taxing district by consolidating the tax base with one or more other districts. This provision affected 90 of the state's 1,046 school districts in school year 1993-94. Collectively, these districts had to reduce their property wealth, using one or more of the options, resulting in the recapture of more than $430 million in local property tax. Of these 90 districts, none used options 1, 2, or 5; 61 used option 3, 22 used option 4, and 7 used a combination of options 3 and 4. SB 7 included a provision that allowed some of these districts to retain a greater amount of taxable wealth for the next several years. The provision, which expires in 1996, permits wealthy districts to maintain their spending at 1992-93 levels and to retain enough property wealth to do so, subject to certain limitations. This provision was included out of concern that rapid reductions could harm student programs, according to a state official. To further limit spending of the wealthy districts, SB 7 capped school property tax rates at $1.50 per $100 of property value for all districts. With local voter approval, districts may exceed the $1.50 limit, up to no more than $2.00, to pay for bonds and debt service. Texas distributes state funds for public education through a two-tiered system of formulas known as the Foundation School Program. Tier I, a foundation formula, provides funds for meeting the state's basic education requirement. All districts are eligible to participate if they levy a property tax of at least $.86 per $100 of property value. 
Tier II funding is designed to provide additional funds to enrich the basic foundation program and to offset the wealth of wealthy districts. Under the provisions of SB 7, participation in tier II funding is limited to districts with per pupil property wealth below $210,000. These districts can receive tier II funding if they set their property tax rates above the level required for tier I funding—between $.86 and the maximum of $1.50 per $100 of property value. Under this arrangement, less wealthy districts willing to tax themselves at higher rates will receive more state aid. Districts with per pupil wealth above $210,000, which can raise more revenue with equal tax effort than their counterpart districts below this level of per pupil wealth, are not eligible to receive tier II aid. Districts with per pupil wealth below $210,000 receive tier II aid in direct proportion to the degree to which they are willing to raise their own tax rates. State officials, former legislators, education advocates, and others we interviewed were unanimous in saying that the new system had greatly improved equity. They noted that compromises had to be made to increase the level of funding available to poor districts while not forcing school district consolidation across the state, but they said that the amount of progress toward greater equity had been substantial. For example, when the new system is fully implemented in 1999, the portion of unequalized revenue in the system will have decreased from nearly 21 percent of all state and local revenues in 1989 to less than 2 percent. Our interviewees regarded greater taxpayer equity as a significant outcome of the new system. Under the system in place in 1989, wealthy districts were able to raise large amounts of money at a low tax rate while less wealthy districts—even if they were taxing themselves at a much higher rate—could not raise the funds needed to provide an education program that met basic requirements. 
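The tier II mechanism described above behaves like a guaranteed-yield formula: for each cent of tax effort above the $.86 tier I rate (up to the $1.50 cap), the state tops up an eligible district's own yield to a guaranteed amount per pupil. The sketch below illustrates this under assumptions; the $20 guaranteed yield per penny of tax rate is hypothetical, and only the $.86/$1.50 band and the $210,000 eligibility cutoff come from the description above.

```python
def tier2_aid_per_pupil(wealth_per_pupil, tax_rate,
                        guaranteed_yield_per_penny=20.0,  # hypothetical figure
                        wealth_cap=210_000,
                        base_rate=0.86, max_rate=1.50):
    """Sketch of tier II state aid per pupil under a guaranteed-yield formula.

    For each $.01 of tax rate above base_rate (up to max_rate), the state
    makes up the difference between guaranteed_yield_per_penny and what the
    district's own tax base yields per pupil for that penny of tax.
    """
    if wealth_per_pupil >= wealth_cap:
        return 0.0  # wealthy districts are ineligible for tier II aid
    pennies_of_effort = (min(tax_rate, max_rate) - base_rate) * 100
    if pennies_of_effort <= 0:
        return 0.0
    local_yield_per_penny = 0.01 * wealth_per_pupil / 100
    shortfall = max(guaranteed_yield_per_penny - local_yield_per_penny, 0.0)
    return pennies_of_effort * shortfall

# A district with $100,000 of wealth per pupil taxing at $1.30 exerts
# 44 cents of tier II effort; its own base yields $10 per penny, so the
# state adds the $10 shortfall per penny, or $440 per pupil.
print(round(tier2_aid_per_pupil(100_000, 1.30), 2))  # 440.0
```

Because the shortfall shrinks as wealth rises, a poorer district receives more aid at the same tax rate, which is the "more effort, more aid" equalizing effect described above.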
They pointed to evidence that the new system's limitations on revenue raised through the property tax in wealthy districts and rewards to less wealthy districts for revenue raised through tax effort were affecting that disparity. For example, in 1989, more than 69 percent of the disparity among districts in per pupil revenue was due to differences in property values; by 1999, when the system has been fully implemented, almost 77 percent of the disparity will be due to differences in tax effort, according to estimates. While our interviewees cited accomplishments under the new system, they also collectively identified four concerns about inequities in the school finance system reemerging: (1) the continued heavy reliance on local property taxes, (2) wealthy districts' concerns about sharing their wealth, (3) less wealthy districts' concerns about continued differences in per pupil spending, and (4) districts' inability to meet rising costs. The Texas school finance system continues to rely on the local property tax for more than half of its total revenue—and under SB 7, local property taxes have increased. Wealthy districts have had to increase their property tax rates to offset the loss of state aid and maintain spending levels. In addition, many less wealthy districts have also increased their taxes because, under the tier II formula, the state rewards less wealthy districts that raise property taxes by increasing their state aid. Between 1990 and 1994, the effective property tax rate statewide increased more than 43 percent, from $.96 per $100 of property wealth to $1.38. In 1994, 94 percent of the districts had a total effective tax rate that exceeded $1.00 per $100 of property wealth. Almost all of those interviewed said that the new system had been designed to rely on the local property tax for most of its revenue to limit state government's costs. 
However, most officials expressed concerns about the state's reliance on the local property tax for such a large part of public education funding and said that they were concerned about a public effort to roll back tax rates in the future. In addition, in the fast-growing districts, funds needed to build schools are competing with funds needed to improve education programs, which decreases educational opportunity, according to an education advocate. Wealthy districts are dissatisfied with this new system and have pressured the legislature to make changes, according to state officials and education advocates. For example, in 1995, the legislature changed the provision that requires wealthy districts to reduce their taxable wealth. Under this change, those districts that write a check to the state (option 3) may reduce their costs through a discount and extend the hold-harmless provision for an additional 2 years. In addition, the state is permitting wealthy districts to retain the money paid for appraising their district's property when such an appraisal is required to meet the recapture clause provisions. Continued pressure on the legislature to make changes that benefit the wealthy districts runs counter to state efforts to improve equity between the wealthy and less wealthy districts. Officials also noted that the new system did not bring something that poor districts had wanted—equality in per pupil spending with wealthy districts. Once the system is fully implemented, in 1999, it will permit wealthy districts to spend about $600 more per weighted pupil than less wealthy districts. The state supreme court indicated in its first ruling that although substantially equal yield for similar tax effort was required to meet the test of efficiency, a per capita distribution or equal spending per pupil was not. 
Despite the court ruling, however, the expected disparity in spending may be too much, according to some education advocates, given that $600 per pupil is a significant difference (for example, it equals $15,000 per 25-student classroom) that could give students in wealthy districts an advantage in financing educational opportunities. The effect of Texas's school finance system has been to provide more revenue for less wealthy districts while limiting the amount of revenue available to wealthy districts. Most officials said that they thought that the new system is achieving its goal—greater equity and more revenue for less wealthy districts. However, they also said that they have concerns about the system because of the statutory tax rate ceiling. As more school districts reach the $1.50 ceiling, their ability to increase spending to meet increased costs will be severely limited unless the state provides additional funding or the tax ceiling is raised. This tax rate cap could be particularly burdensome for fast-growing districts with large numbers of minority and economically disadvantaged students. At issue is whether these districts will be able to fund the costly services needed to meet students' needs. For example, in 1994, of the more than 445,000 students in the 123 districts with the lowest per pupil property wealth, 80 percent were minority students, 24 percent participated in bilingual programs, and almost 70 percent were economically disadvantaged. In response to a suit filed by small, rural districts, a state court ruled in 1991 that Tennessee's school finance system was unconstitutional, leaving it to the legislature to devise a remedy. This ruling came when efforts were already under way in the state not only to revamp the state's school finance system but also to reform the management and academic curriculum of the state's public elementary and secondary schools. 
In a budgetary environment in which health and corrections costs were increasing, and facing political pressure to incorporate accountability and maintain local control, the Tennessee legislature passed legislation in 1992 to reform the school finance system. Motivated by a potential court-imposed solution, the legislature increased the amount of state funds for education, funded the increase with a new half-cent sales tax, imposed accountability provisions, and crafted what has been described as a win-win solution that benefited all districts. Legislators, state education officials, school district administrators, and others we interviewed said that the state has improved the educational opportunities of students in poor districts since 1992. Indicators of success included reduced spending disparities among districts, a large share of the funds going to classroom expenditures, and more educational opportunities (richer curriculum). Officials also praised other aspects of the revised school finance system, such as the cost-based approach to funding education and the flexibility districts have to spend their funds. However, officials we interviewed also identified several concerns dealing with a growing antitax sentiment, teacher salary increases, limits on local spending, the educational needs of the urban poor, and accountability. As a result of a 1995 supreme court decision, the state has taken action to finance teacher salary increases. Depending on how some of the other concerns are handled, officials suggested that inequities in the state's school finance system may recur. Spending for public elementary and secondary education in Tennessee totaled $3.4 billion in fiscal year 1994. State contributions, the largest share, amounted to 49.5 percent of the total, while local governments contributed 40.6 percent, and the federal government contributed 9.9 percent. 
Most of the state funding for education, including higher education, is earmarked and comes primarily from Tennessee's 6-percent sales tax, the state's single largest tax revenue. The state does not levy a property tax and collects an income tax only on unearned income, such as stock dividends. In fiscal year 1994, the sales tax amounted to about 65 percent of education funding. Additional sources of state revenue for education include other earmarked revenues, such as taxes on tobacco and mixed drinks, and general fund revenues from a broad range of licenses, fees, and other sources. Property taxes and local sales taxes are the two major sources of local district tax revenues for elementary and secondary education. In school year 1993-94, property taxes accounted for 36.5 percent of local revenues in support of public schools, and local-option sales taxes accounted for 28 percent. The local-option sales tax allows localities to raise the 6-percent sales tax rate by as much as 2.75 percentage points. The state requires at least one-half of the local-option sales tax revenue to go to education. In 1988, the Tennessee Small School Systems, representing 77 of the 139 districts in Tennessee, filed suit in chancery court against the state, claiming that, because of disparities in funding, the state school finance system violated the education clause and the equal protection requirements of the state constitution. Frustrations arising from the state's slow pace in enacting school finance reforms prompted the lawsuit. Our analysis of school year 1989-90 student data showed that districts involved in the lawsuit tended to be poorer than those that were not: the percentage of students eligible to participate in the free or reduced-price lunch program averaged about 37 percent in plaintiff districts compared with about 30 percent in nonlawsuit districts. 
However, almost 50 percent of Tennessee’s students eligible to participate in the lunch program were in the nine large urban and suburban districts that intervened on the state’s behalf out of concern for losing funding to rural districts. The plaintiff districts asked the court to declare the state’s school finance system unconstitutional, to enjoin the state from acting under the statutory school finance system, and to require the legislature to enact a constitutional school finance system. In September 1991, the chancery court ruled in favor of the small districts, finding that the school finance system violated the equal protection provisions of Tennessee’s constitution but delaying the effective date of its order until June 30, 1992, giving the legislature an opportunity to correct constitutional deficiencies. The state and a group of urban and suburban district intervenors appealed the 1991 chancery court ruling to the court of appeals, which reversed the lower court order; the case was then appealed to the state supreme court by the Tennessee Small School Systems. In 1993, the state supreme court ruled that Tennessee’s school finance system was unconstitutional. The court found that the state failed to show a legitimate state interest justifying granting to some citizens educational opportunities that are denied to others and, thus, held that the school finance system violated the constitutional guarantee of equal protection. The court found that the state equalized (that is, distributed funds on the basis of educational costs and ability to pay) less than $60 million of its $2.5 billion education expenditures and that none of the funds raised by the local-option sales tax were equalized. As a result, substantial disparity existed in the revenues available to the different school districts. In 1987, the disparity in total current funds available per pupil showed that some districts had more than twice as much as others—$1,823 per pupil compared with $3,669. 
The court found that the disparity was due to the state’s reliance on local governments to fund education and local governments’ varying ability to raise sufficient revenues and not necessarily due to an inadequate effort by localities to tax themselves. School districts with more retail activity and higher property values and commercial development had more funds to educate their children than districts with less retail activity and lower property values. As one official put it, “Not every county has a Wal-Mart.” The disparity in funding, the court concluded, led to students in the plaintiff schools not having equal access to adequate educational opportunities, such as laboratory facilities; computers; current textbooks; buildings; and music, art, and foreign language courses, some of which were required by the state. Further, plaintiff schools had difficulty retaining teachers, funding needed administrators, and providing sufficient physical education and other programs. The court linked inadequate funding of the plaintiff schools to their educational outcomes. The court noted that, in the 10 wealthiest districts for the 1988-89 school year, 66 percent of the elementary schools and 77 percent of the secondary schools were accredited by the Southern Association of Colleges and Schools compared with 7 percent and 40 percent, respectively, in the 10 poorest districts. The court observed that graduates from accredited high schools have better success in college acceptances. Students in the plaintiff schools, however, had poor standardized test results and more need for remedial courses in college. While the case was pending in chancery court and well before the 1993 supreme court decision, legislative leaders began to develop a remedy, building on the work of an ad hoc committee of statewide education officials formed by the Tennessee State Board of Education and making refinements as developed by officials in the Governor’s administration. 
The committee had been analyzing Tennessee’s school finance system since 1986, reported a State Board of Education official, and, by 1991, had developed and refined recommendations for a system. This system, termed the Basic Education Program, funded local school districts on the basis of the cost of providing a basic education and determined the local share of this cost on the basis of a locality’s fiscal capacity. Assuming the state was responsible for funding about two-thirds of the total cost of the program, the governor’s administration estimated, according to an Assistant Commissioner for Tennessee’s Department of Education, that the additional state funds required would be $665 million, to be phased in over 6 years. According to a budget coordinator in the state’s Department of Finance and Administration, the call for increased spending on education came when the state was facing budget pressures from Medicaid and corrections. Enrollment growth was increasing Medicaid expenditures, and a prison construction program was fueling corrections expenditures, reported the budget coordinator. In our analysis of state budget data, we found that Medicaid expenditures increased by 103.6 percent from fiscal year 1988-89 to fiscal year 1991-92, while corrections expenditures increased by 19.8 percent during this period. Further, the budget coordinator said that demands to equalize the state’s school finance system posed the biggest education-related pressure on the budget for this period. However, he also said that costs associated with serving special education students—who were increasing at a faster rate than the general student population—also placed some pressure on the budget. In support of his opinion, we found that while Tennessee’s net enrollment increased by 3.9 percent from 860,101 in school year 1987-88 to 893,272 in school year 1991-92, the number of special education students increased by 18.4 percent—from 129,725 to 153,634 over this same period. 
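The enrollment comparison above is a straightforward percent-change calculation on the raw counts; a minimal sketch confirming the figures cited, and showing that the special education population grew nearly five times as fast as overall enrollment:

```python
def percent_change(old, new):
    """Percent change from an old count to a new count."""
    return (new - old) / old * 100

# Tennessee net enrollment, school year 1987-88 to school year 1991-92
print(round(percent_change(860_101, 893_272), 1))  # 3.9
# Special education students over the same period
print(round(percent_change(129_725, 153_634), 1))  # 18.4
```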
In developing the specific legislation for the Basic Education Program, legislators faced at least two major challenges: first, how to raise the estimated $665 million needed in additional revenue to fully fund the program (level up the financing) and, second, how to equitably distribute funds among districts. In accomplishing these tasks, the legislature faced legal and political pressures. The chancery court ruling and possible further court action motivated the Tennessee State Legislature to pass the Education Improvement Act, which established the Basic Education Program and funded it using the formula developed by the State Board of Education. The act also restructured the management of schools, set new academic standards, and mandated a new accountability system. The legislature then passed a half-cent sales tax increase to finance the act. Regarding increased revenues, officials we interviewed, including key legislators at the time the new laws were passed, talked about the difficulty in convincing other legislators and members of the public that an increase in education spending was needed. Among the opposition were certain private education groups that believed any increase in funding for public education was a waste and members of the business community who opposed a sales tax increase with no guarantee that the funds would improve education. Using an income tax to finance the plan proved not to be a viable option because the Tennessee governor tried and failed, in a special session, to pass what would have been the state’s first broadly based income tax. Legislators and state education and other officials involved in crafting the legislation said that the inclusion of accountability provisions and the half-cent sales tax law were essential to passing the Basic Education Program. 
They cited key financial accountability provisions such as earmarking the revenue raised by the half-cent sales tax for education and specifying that revenues could not be spent on increases in teacher salaries—an expenditure many legislators believed was not linked to improvements in learning. They said that despite some opposition from teachers, school administrators, and some legislators, it was also important to include programmatic accountability provisions which, among other things, made local school officials accountable to the state for school district performance. For example, the law gave the Commissioner of Education power to remove from office local school board members or superintendents of districts that failed by school year 1994-95 to meet performance goals in such areas as student attendance rates and test scores. To facilitate this accountability, the law also required all districts to elect local school board members who in turn would be responsible for appointing a district superintendent. Districts previously had used a variety of methods to elect or appoint their local school officials. Another challenge entailed devising an equitable allocation system, given that the state would not be at full funding for 6 years. The chair of the House Education Committee said that it was important to show legislative members that, with the increase in state spending, all districts, including large urban districts, were better off than they would have been under the old finance system. According to a state education official, the solution entailed adopting a wage adjustment factor developed by the state Department of Finance and Administration to ensure that the large, urban districts received more money under the new system compared with the old system. Then, because the money had to be phased in, the legislature chose to distribute the new money according to how close a district was to its full funding level. 
The farther away a district was from full funding, the more new money it proportionately received compared with a district that was closer to its full funding level. During our visit, Tennessee was in its third year of phasing in the Basic Education Program and was funding 88.1 percent of the fully funded level, $1.9 billion, for fiscal year 1994-95. Given this funding history, the benefits of the program cited most frequently by officials we interviewed included improved equity in the funding of poor districts compared with wealthy ones, greater educational opportunities for all districts (particularly poor ones), more flexibility in spending decisions at the local level, and the introduction of a cost-based finance system. District funding and expenditure data obtained from the state also indicate that inequities have lessened since passage of the new finance program. Tennessee's Department of Education reports that the new school finance system equalizes 94 percent of state funding to districts in support of elementary and secondary education in fiscal year 1994-95, compared with 62 percent in fiscal year 1991-92 under the old system. The increase in the amount of equalized funds has reduced the disparity in spending: the Tennessee Advisory Commission on Intergovernmental Relations reports that the disparity in per pupil revenues (measured as how much larger per pupil revenue is in the district at the 95th percentile than in the district at the 5th percentile) has declined from 83.9 percent before the enactment of the Education Improvement Act in 1992 to 73.6 percent after 1 year of Basic Education Program funding. The commission estimated that the disparity will fall to 36.1 percent by 1997, the year the Basic Education Program is fully funded. We found that plaintiff districts benefited to a greater extent than the other districts under the new financing program. 
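The gap-proportional phase-in rule described above (the farther a district is from its fully funded level, the larger its share of new money) can be sketched in a few lines. The districts and dollar amounts below are hypothetical, and the helper function is ours, not part of the state's formula.

```python
# Sketch of Tennessee's phase-in rule as described in the text: new
# money is distributed in proportion to each district's shortfall from
# its fully funded level, so districts farther from full funding
# receive proportionately more. All figures below are hypothetical.

def allocate_phase_in(full_funding, current_funding, new_money):
    """Split new_money across districts in proportion to each
    district's gap from its fully funded level."""
    gaps = {d: full_funding[d] - current_funding[d] for d in full_funding}
    total_gap = sum(gaps.values())
    return {d: new_money * gaps[d] / total_gap for d in gaps}

# District B is three times farther from full funding than district A,
# so it receives three times as much of the new money.
full = {"A": 10_000_000, "B": 10_000_000}
current = {"A": 9_000_000, "B": 7_000_000}
shares = allocate_phase_in(full, current, 1_000_000)  # A: 250,000; B: 750,000
```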
In our analysis of Tennessee school district funding data, we found that the average total funding per pupil in the 77 small, rural plaintiff districts has increased. In the fiscal year before the 1992 implementation of the new finance program, the average total funding per student was $2,476 in these districts. In fiscal year 1994-95, the average was $3,254, an increase of about 32 percent. The remaining school districts experienced a 23.1-percent increase in their average total funding per pupil over that same period, with the average per pupil funding increasing from $3,117 to $3,782. The Tennessee Department of Education has been keeping track of how the additional funding made available under the Basic Education Program has been spent. In a 1994-95 budget report that accounts for about $275 million of the new funding under the program, classroom expenditures represented about 80 percent of the new money. Classroom expenditures include the hiring of new personnel, such as teachers, counselors, librarians, and principals, and the purchasing of supplies such as textbooks, instructional materials, and technologies. Within this expenditure category, the largest single portion (about $100 million) was spent on reducing class sizes in kindergarten through eighth grade by hiring 2,999 new teachers. Nonclassroom expenditures accounted for the remaining 20 percent of the new money, with about $25 million of the amount going for the construction and renovation of classrooms. Nearly all the officials we interviewed indicated that the impact of new money on small, rural districts has been significant. A former legislator said that an estimated 70 to 80 small, rural districts can now provide educational opportunities to their students that they could not offer before. For example, he said that schools for the first time have art and music teachers and can offer courses that will better prepare their students for college. 
These improvements, in turn, will help enable such schools to receive accreditation from the Southern Association of Colleges and Schools. The Tennessee Department of Education reported that districts have much more flexibility in the use of state education funds under the Basic Education Program formula than they had under previous state funding programs. Officials we interviewed, including two legislators, a small district superintendent, and a state school board association official, agreed with the Department's assessment, citing flexibility in decisionmaking as a benefit. The funding formula groups components of basic education into classroom and nonclassroom categories. The formula's only earmark is on classroom funds, which must be spent on classroom components but not necessarily on the specific classroom cost component that generated the funds. As an example of the flexibility possible in local spending decisions, a superintendent said that district officials may receive funding to hire a nurse but instead may decide to use the funds to hire a teacher or purchase technology or some other classroom-related item. The old formula based its funding levels on appropriated amounts; the new formula links funding to the costs of 42 critical components, such as teachers and textbooks, associated with providing a basic education. The Department of Education annually reviews and updates the components' costs. One or both of these features—the inclusion of a wide array of cost components and the annual review—were among the benefits cited by officials we interviewed, including a Tennessee State Education Board official, the Chair of the House Education Committee, and representatives of the state teacher union and school board association. As the State Education Board official explained, the state now has a mechanism for automatically increasing funding to meet increasing enrollment and cost. 
He explained that, previously, funding for items such as transportation had remained the same for a 6- to 8-year period, despite an approximate 33-percent increase in school enrollment. Although acknowledging improvements, officials we interviewed also identified several concerns about the new school finance system. The five concerns most frequently cited involved (1) the growing antitax sentiment among property owners; (2) the exclusion of teacher salary increases from the finance plan; (3) the lack of limits on local contributions; (4) the high educational costs of poor, urban students; and (5) the new accountability system. Because of a lawsuit, the state has since modified its finance system to include increases in teacher salaries. Some officials indicated that, depending on how concerns about the antitax sentiment, the lack of limits on local spending, and the needs of the urban poor are handled, inequities in the school finance system could recur. Given that localities are required to pay a prorated share toward the cost of their district's basic education, officials we interviewed questioned the willingness of localities to raise their taxes to keep pace with the cost of components, such as teacher salaries, that are expected to increase over time. As a state school board official explained, the Education Improvement Act required schools to comply with reduced class sizes 4 years from the date of full funding for the new school finance plan. This provision, he said, will create significant upward pressure on localities' contributions, and counties will be dismayed when they learn how much they will need to contribute to earn the state share. A state board of education official said that, as a result, constituents would pressure their state representatives to change the formula to avoid raising taxes to finance their local contribution. 
If the local share were to be reduced, the official said, the state may find it more difficult to ensure the adequate and equitable financing of education needs in all districts. The Tennessee Small School Systems, representing the 77 small, rural districts that had filed the original 1988 lawsuit, brought suit challenging the provision in the Education Improvement Act that new funding under the Basic Education Program could not be used to increase the salaries of existing teachers. The plaintiffs contended that the new funding scheme was unconstitutional because equalization would occur over several years and the plan included no provision to equalize or increase teacher salaries. In its February 1995 opinion, the state supreme court upheld the plan's constitutionality, accepting the state's argument that complete equalization of funding can best be accomplished incrementally, but it found that the plan's failure to provide for the equalization of teachers' salaries was a significant defect which, if not corrected, could put the entire plan at risk. The court stated, "Teachers, obviously, are the most important component of any education plan or system, and compensation is, at least, a significant factor determining a teacher's place of employment." Underscoring the importance of the program's key provisions for funding and governance, the court approved Tennessee's Basic Education Program. Since the court order, the 1995 Tennessee State Legislature has appropriated $7 million of the estimated $12 million needed to equalize increases in teacher salaries. Although the Education Improvement Act mandates a minimum local contribution, it does not limit the amount of the contribution. 
A superintendent of one of the plaintiff districts stated, "To impose caps to limit local taxing authority in the name of equality or uniformity of education has no place in a new system." Other officials, however, including a state attorney and a state legislator, suggested that some districts may be very willing to contribute to their local schools and that, over time, disparities between rich and poor districts may again grow to some unacceptable level. Officials, including two state education officials and a state attorney, indicated that the Basic Education Program formula may not adequately address the needs of the urban poor who reside in counties with high fiscal capacities. An official with the Tennessee Advisory Commission on Intergovernmental Relations said that the formula makes no allowance for the likely higher-than-average unit costs associated with serving the educational needs of a large, dense population of students—many of whom can be characterized as poor or at risk—in the urban districts. Compounding the problem of perhaps not receiving adequate funding for their students' educational needs, he also said, urban districts have relatively high fiscal capacities, and therefore their state funding increases have been proportionately smaller compared with the increases in districts with lower fiscal capacities, which are typically more rural. Finally, the state does not have any requirement for counties to "weight" the distribution of state or local funds to their school districts according to the district's share of poor or at-risk students in the county. Officials we interviewed expressed concern about the reaction of groups affected by certain accountability provisions in the Education Improvement Act. For example, the Chair of the House Education Committee said that members of the legislature have made repeated attempts to repeal provisions related to the election of school board members and the board appointment of district superintendents. 
The provisions in Tennessee's pioneering new approach for measuring gains in student performance—the Value Added Assessment System—also sparked controversy. The new assessment system measures gains in student performance in grades three to eight and compares them with national average gains in those grades over the most current 3-year period. Observations made by officials, including two state education officials, a legislator, a teacher union representative, and a district superintendent, indicated that the new system is problematic, although a key legislator said he believed the problems can be worked out. Problems these officials cited included (1) difficulties in holding schools accountable for achieving certain performance goals before the Basic Education Program is fully funded; (2) premature student testing—that is, conducting tests before students have had an opportunity to learn the material; and (3) bad publicity caused by early results that showed only small gains in better-performing schools. In 1988, 52 school districts with below-average property wealth sued the state of Minnesota for providing—in alleged violation of the state constitution—unequal access to education revenue and unequal education opportunities. Unlike in some other states, about 90 percent of Minnesota's education revenues were already subject to wealth- and need-based equalizing formulas. At issue was about 7 percent of general education revenue for elementary and secondary education. Although the Minnesota Supreme Court ruled in 1993 that the finance system was constitutional, the Minnesota State Legislature moved forward with plans to equalize some of the remaining funding. 
To do so, legislators had to balance three competing interests: (1) increasing funds available to less wealthy districts, (2) dealing with growing pressures for tax relief from business and other property owners, and (3) assuaging the concerns of high-spending districts that they might lose revenue because of changes to the system. The legislature’s revisions have improved the fairness of the system, according to almost all the officials and education advocates we interviewed. However, issues that have emerged since the revisions—districts’ ability to pass tax levies in an antitax environment and the rapidly growing costs of educating children with special needs—have created additional problems that may raise new concerns about finance equity. We determined that in fiscal year 1995-96, public school districts in Minnesota were projected to receive more than $5.5 billion from federal, state, and local sources. More than half of that was provided by the state, and about 44 percent was provided by localities. The federal government contributed the rest. Since fiscal year 1986, Minnesota has spent more on primary and secondary education than it has on any other single major state program, amounting to about one-third of most recent state expenditures. Since 1983, the state’s share of total district revenue, relative to local and federal contributions, has dropped below 50 percent only once and has been as high as 55 percent. State appropriations for public schools are funded primarily by statewide income and sales taxes. The local contribution is funded primarily with local property taxes. Understanding how Minnesota funds education helps to better explain the pursuit of revenue equity in the state and how it differs from such pursuits in states where much wider disparities existed. 
Elementary and secondary schools receive the bulk of their general operating funds and levy authority from the state through the General Education Revenue Program, a program subject to wealth equalizing formulas. Nearly three-fourths of the total state funding for elementary and secondary education is distributed to school districts through this program. The remaining one-fourth of the state's appropriation is for special-purpose or categorical aids, some of which also are wealth and cost based. The General Education Revenue Program entitles each district to a specified revenue allowance per pupil, with additional allowances allocated on the basis of economic, geographic, and other cost-related circumstances of the district. The "basic revenue allowance," without the cost-related adjustments, was $3,150 per pupil unit in fiscal year 1995-96. The state pays the district the difference between what a district can raise at a statewide tax rate and its basic revenue allowance. The proportion of general education aid received by each district depends on the district's relative property wealth per pupil. A few very wealthy districts receive no general education aid, while relatively poor districts receive most of their general education revenue as state aid payments. In addition, districts may supplement this basic funding by implementing a voter-approved operations tax levy. Revenue raised by voter-approved operations levies constituted about 6 percent of total district revenue in 1994. The optional operations tax levy was at the heart of the dispute about funding equity. As in Texas and Tennessee, a lawsuit prompted action to revise Minnesota's finance system. The suit was brought in 1988 by rapidly growing school districts whose property wealth per pupil had dropped below the state average. 
The suit alleged that the education finance system violated two provisions of the state constitution: its education clause (which requires a "general and uniform system of public schools") and its equal protection clause (which provides that a citizen may not be deprived of any right or privilege). This challenge was to a relatively small portion of Minnesota's funding structure because more than 93 percent of Minnesota's general education revenue already fell under wealth- and cost-based funding schemes. Plaintiffs contended that the failure to equalize the remaining 7 percent of this revenue left too much discretion with local officials and permitted wealthy districts to generate much more additional funding than low- or average-wealth districts. The lawsuit challenged three types of state funding programs: (1) voter-approved operations tax levy; (2) revenue guarantees typically benefiting high-spending, wealthy districts; and (3) local debt service levy approved by voters to finance bonds for school construction and renovation. The court found that the constitutional requirement for a "general and uniform system" of public schools does not mandate complete funding equalization and that any inequities that existed did not actually violate the constitution's education clause. Nevertheless, the legislature continued to implement revisions it had begun in 1991 to further reduce the disparities between wealthy and less wealthy districts. To improve fiscal equity in Minnesota, legislators said they found they had to balance three competing interests: (1) increasing funds available to low-property wealth districts, (2) dealing with growing pressures for tax relief from business and other property owners, and (3) assuaging the concerns of high-spending districts that they might lose revenue because of changes to the system. 
Unlike other states, where legislatures made significant structural changes to the school finance system to affect equity, the Minnesota State Legislature made minor changes within the existing system, education leaders and state officials said. Legislative revisions principally focused on voter-approved operations and debt service property tax levies. Beginning in 1992, state aid was provided to equalize a portion of the optional operations tax levies; the state also began providing debt service equalization aid that year. The state's tactic to improve equity in the levy programs was to calculate an aid contribution per district in the same manner as it calculated the state's share of the district's General Education Revenue. Equalizing aid was provided to districts passing levies in a proportion similar to the proportion they received under the equalized General Education Revenue program. Initially, only $305 per pupil unit raised by the voter-approved operations tax levy was subject to this equalizing scheme. The debt service levy equalization aid was phased in over 3 years, at a lower rate of equalization than operations levies, and was made available only to districts whose debt service exceeded 10 percent of their taxable base per pupil. The need to provide additional revenue for some districts was complicated by other pressures on the state budget, according to one former member of the education finance committee in the Minnesota House of Representatives. When these changes were being considered, the state also faced growing health care costs, due to expanded coverage and enrollment growth, and growing corrections costs, due to harsher sentences and improved law enforcement. We found that health care spending increased over 47 percent between fiscal years 1988 and 1991. Corrections spending was up almost 42 percent in the same period. 
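The aid computation referenced here mirrors the General Education Revenue mechanism described earlier: the state pays the difference between a district's basic revenue allowance and what a statewide tax rate raises on its property wealth. A minimal sketch, in which the property values and the 1 percent rate are hypothetical (only the $3,150 allowance is the reported fiscal year 1995-96 figure):

```python
# Foundation-style equalization sketch: the state makes up the gap
# between the basic revenue allowance and what a statewide tax rate
# raises on a district's property wealth. Districts wealthy enough to
# raise the full allowance locally receive no aid. Property values and
# the 1 percent rate are hypothetical; $3,150 is the reported fiscal
# year 1995-96 allowance per pupil unit.

BASIC_ALLOWANCE = 3_150

def state_aid_per_pupil(property_value_per_pupil, rate_percent):
    local_yield = property_value_per_pupil * rate_percent / 100
    return max(0.0, BASIC_ALLOWANCE - local_yield)

poor_aid = state_aid_per_pupil(100_000, 1)  # raises 1,000 locally -> aid 2,150
rich_aid = state_aid_per_pupil(400_000, 1)  # raises 4,000 locally -> aid 0
```

Per the text, the levy equalization program applied this same pattern to the first $305 per pupil unit of voter-approved operations levy revenue.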
State legislative officials and education advocates said that education finance negotiations included pressures from business interests and tax reformers who wanted to make property tax relief part of education finance reform. Of interest to business was the state's property tax system, which taxed different types of property used for different purposes at significantly different rates. The lowest tax rates were applied to agricultural property; the highest rates were applied to commercial property. Business leaders, whose commercial property could be taxed at more than four times the lowest residential homestead property rate, wanted changes. In addition, one group sought to cap statewide property tax rates for operations levies; state law did not limit tax rates for optional levies. To deal with these concerns, legislators initially made several changes. They voted to terminate all operations levy authority by July 1, 1997, forcing districts to go back to the voters for any renewal, and they limited all operations levies to 5-year terms. Specifically to respond to business concerns about tax fairness, the legislature began to phase in a market value-based property tax system for optional operations levies; after 1997, commercial property was to no longer be taxed at a higher rate than homestead and other property. Finally, the legislature capped the amount of optional operations levy revenue that a district could raise. In effect, this limited districts' property tax rates for optional operations levies, which state law had not previously limited. Initially, the revenue limit was set at 35 percent of the basic revenue allowance but was further reduced to 25 percent in 1993. 
Members of the legislative committee addressing education finance issues said some high-spending districts were concerned that revisions to the finance system might reduce their revenue, create fiscal hardship, and harm the quality of their education programs. These districts sought revenue protections and were supported by their legislators. For example, most districts’ optional operations levy revenue was below the statutory limit, though a few districts exceeded the limit, legislative officials said. Districts located in sparsely populated areas were not subject to revenue limits. In addition, suburban metropolitan districts experiencing declining enrollment were allowed to retain their excess revenue. Additional protective measures that the legislature passed were to postpone the statutory expiration date for optional operations levies from 1997 to 2000, and to extend the 5-year term limit on newly passed levies to 10 years. Districts have options to extend these levies further if they convert their levies to a market-value base. The legislature also continued a state revenue program specifically challenged as unfair by plaintiff districts in their equity suit against the state. This program guarantees districts revenue that might otherwise have been reduced due to changes in the school finance system. One legislative fiscal analyst said that it is unlikely this program will be eliminated, although funding is being reduced, because a handful of districts greatly depend on these dollars. Legislative and education officials and education advocates generally agreed that equalizing optional property tax levies had made the finance system fairer. However, education leaders in particular said that several additional problems have emerged since the revisions and these problems remain unresolved. 
Legislative and education officials and education advocates generally agreed that the new equalizing levy aid program has moved the state closer to a finance system in which equal tax effort generates equal revenue per pupil unit among districts. For example, we found that the differences in relative tax burden between the state's highest property wealth and lowest property wealth districts have diminished since the 52 school districts filed suit in 1988. According to a document prepared by the plaintiff districts, the tax rate for the poorest districts was then on average almost one-fourth higher than that for the wealthiest districts. By 1992-93, however, we determined, on the basis of State Department of Education data, that this disparity had diminished to just over 6 percent. Furthermore, we also determined that in 1988 a less wealthy district falling at the 10th percentile generated expenditures of $59.57 per pupil for every percent of tax levied, compared with $84.98 per pupil for every percent of tax levied in a wealthy district at the 90th percentile. By 1992-93, however, the per pupil expenditures for every percent of tax levied were essentially equal in two districts falling at the 10th and near the 90th percentiles in wealth: the less wealthy district generated $129.58 in expenditures per pupil for every percent of tax levied, compared with $132.25 per pupil in the wealthy district. Differences in spending, which the state supreme court found were largely justified by differences in operating costs and pupil needs, have not changed significantly since 1988. In 1992-93, the district at the 95th percentile in per pupil operating expenditures was spending 54 percent more than the district at the 5th percentile, slightly more than in 1988, when the difference was 48 percent. 
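The tax-effort comparison above reduces to a simple relative-gap check: how much larger one district's expenditure yield per percent of tax levied is than another's. Using the dollar figures reported in the text (the helper function is ours):

```python
# Relative gap in expenditures generated per percent of tax levied,
# using the per pupil dollar figures reported in the text. In 1988 the
# wealthy district's yield was roughly 43 percent higher; by 1992-93
# the two yields were essentially equal (about 2 percent apart).

def pct_larger(low, high):
    """How much larger `high` is than `low`, in percent."""
    return (high - low) / low * 100

gap_1988 = pct_larger(59.57, 84.98)    # 10th- vs 90th-percentile wealth
gap_1993 = pct_larger(129.58, 132.25)
```

This is the same relative-gap measure the report uses elsewhere for its 95th- versus 5th-percentile revenue comparisons.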
The legislature has required, however, that if the spending gap between the districts at the 5th and 95th percentiles in general education revenue begins to increase significantly, the state Department of Education will devise a plan to reduce that growth and recommend that plan to the legislature. State education leaders pointed out two problems that have emerged since equalizing legislation was introduced for optional levies. These problems, which concern public support for taxes and growing education costs, may affect state efforts to achieve improved finance equity. First, voter resistance to taxes is growing. As a result, fewer districts can pass optional levies, officials said. For example, voters in one such district near St. Paul twice rejected the renewal of an existing optional operations levy in 1994. The loss of about $350 per pupil unit caused the district to lay off about 70 teachers in 1995, the superintendent said. Voter discontent with taxes overall and the property tax in particular affected the outcome of the referendum, he said. According to one state finance official, no real growth has occurred in the proportion of districts statewide passing operations levies, even as the state has increased equalizing aid for districts passing levies. The ability of some districts to pass levies, when other districts cannot, will affect spending disparities in the state, education advocates said. The second emerging problem is that the number of special needs students requiring costly special education programs is creating fiscal pressures statewide, state officials and education advocates said. While state appropriations for the basic revenue allowance increased only 10 percent between 1991 and 1995, aid for remedial education, special education, and limited English proficiency rose 75 percent, 103 percent, and 37 percent, respectively. 
Several people we interviewed said that, although the state has increased aid for special needs children, aid has not kept up with the costs of educating them. The director of the organization representing the 52 school districts that brought the 1988 suit said that some districts have been hit harder by this growing population than others. For example, we determined that the Osseo School District in school year 1987-88 was spending $277 per pupil on special education services, but by 1992-93, its special education spending had almost tripled to $766 per pupil. Another district struggling with increasing costs has been the Minneapolis School District, where a large proportion of the children enrolled are from economically disadvantaged backgrounds and are of ethnic minorities. During our visit, this district was considering another lawsuit against the state. Even though the district has among the highest tax bases in the state and spends more per pupil than 95 percent of the state’s districts, it is not enough, a policy official with the district said. Academic achievement by minority students has been below national averages. Education advocates pointed out that state aid to districts reflects the amount of money the state has available to fund education, rather than the cost of providing it. Although the legislature established a commission in 1993 called the Coalition for Education Reform and Accountability to both define and estimate the cost of ensuring a “basic education,” the legislature has not acted on the coalition’s assessment and recommendations. The legislature did not renew the coalition’s funding in 1995. Legislative and education officials said the coalition defined “basic education” very broadly and indicated that a significant increase in education spending by the state would be required. Given existing budgetary constraints, it is unlikely that the coalition’s recommendations will be implemented, state finance officials said. D. 
Catherine Baltzell, Supervisory Methodologist, provided valuable technical advice on study design; Nancy Purvine, Evaluator, led the case study of Texas and coauthored the report; Virginia Vanderlinde, Evaluator, led the case study of Minnesota and coauthored the report; Mary Reich, Attorney-Adviser, and Dayna Shah, Assistant General Counsel, provided legal advice; and Stanley H. Stenersen, Reports Analyst, facilitated the organization and writing.
Pursuant to a congressional request, GAO reviewed the experiences of three states that reformed their school finance systems, focusing on the: (1) reforms made to each school finance system; (2) legal, budgetary, and political pressures that state legislatures faced in making the reforms; and (3) impact of the legislative remedies in addressing educational funding disparities. GAO found that: (1) lawsuits prompted each state to address education funding disparities among school districts; (2) legislative solutions in all three states helped poor districts without harming the educational programs of wealthy districts and were sensitive to public sentiments concerning taxes; and (3) states undergoing similar education finance reforms should define the equity goals of their school finance systems in terms of the funding needed to achieve a certain level of student performance, link funding reform with greater accountability for student performance, and encourage all groups affected by education finance reform to participate in the decisionmaking process.
The 10 federal financial regulatory agencies in our review vary in size, mission, funding structure, whether they bargain with a union, and how long they have been implementing aspects of performance-based pay systems. For example, FHFB is the smallest agency with just over 120 employees, while FDIC, the largest agency, had more than 4,300 employees as of September 2006 and has been implementing pay for performance since 1998. These agencies also regulate a range of activities, including banking, securities, and futures. Appendix II includes the financial regulators’ missions, funding structures, and whether they are unionized and bargain with a union over pay and benefits. Under Title 5 of the U.S. Code, the financial regulatory agencies have the flexibility to establish their own compensation programs without regard to various statutory provisions on classification and pay for executive branch agencies. At the same time these financial regulators received increased flexibility regarding compensation, Congress also generally required that they seek compensation comparability with each other. A provision in FIRREA requires six agencies—FDIC, OCC, NCUA, FHFB, FCA, and OTS—in establishing and adjusting compensation and benefits, to inform each other and Congress of such compensation and benefits, and to seek to maintain comparability regarding compensation and benefits. Additional FIRREA provisions require FCA, FHFB, NCUA, OCC, and OTS to seek to maintain compensation and benefit comparability with the Federal Reserve Board. Although the Federal Reserve Board is under no obligation to seek to maintain compensation or benefit comparability with these or any of the other financial regulators, it has agreed to share compensation information with the other financial regulators. The other three agencies are subject to their own compensation comparability provisions.
As required by its 1992 enabling legislation, OFHEO must maintain comparability with the compensation of employees from the Federal Reserve Board, OCC, FDIC, and OTS and consult with those agencies in that regard. In 2002 legislation, SEC and CFTC were placed under comparability requirements. SEC must consult with and seek to maintain compensation comparability with FDIC, OCC, NCUA, FHFB, FCA, OTS, and CFTC. However, as shown in table 1, this legislation did not require these agencies to seek to maintain compensation comparability with SEC. Similarly, CFTC must consult and seek to maintain compensation comparability with the six FIRREA agencies, but those agencies are not required to seek to maintain compensation comparability with CFTC. We previously identified key practices for effective performance management based on public sector organizations’ experiences both here and abroad. High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. Performance-based systems reward employees according to their performance by using performance ratings as the basis for pay increases. Linking pay to performance can help to create a performance-oriented culture by providing monetary incentives to become a top-performing employee. At the same time, as a precondition to linking pay to performance, performance management systems need to provide adequate safeguards to ensure fairness and guard against abuse. Providing adequate safeguards that help to ensure transparency can improve the credibility of the performance-based pay system by promoting fairness and trust. Safeguards can include establishing clear criteria for making rating decisions and determining merit increases, and providing overall results of performance rating and pay increase decisions to all employees, while protecting confidentiality.
Effective performance management systems also make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. As we have previously reported, effective performance management systems can provide management with the objective and fact-based information it needs to reward top performance and provide the necessary information and documentation to deal with poor performers. Overall, the federal financial regulators have implemented key practices for effective performance management systems in ways that consider the unique needs of their organizational cultures and structures, but some have not fully implemented certain practices. For purposes of this section, we focus on the regulators’ implementation of the two key practices of (1) linking pay to performance (which includes building in safeguards), and (2) making meaningful distinctions in performance. First, we found that while the regulators generally linked pay to performance, two regulators awarded across-the-board increases to employees regardless of their performance. Second, while most regulators generally used safeguards in varying ways to increase transparency, one did not establish and communicate performance standards to its nonexecutives, which resulted in questions about how decisions were made and could compromise the credibility of the performance system. Third, many regulators did not fully implement the safeguard of providing overall ratings and pay results to all employees, which reduced the transparency of their performance-based pay systems. Fourth, we found that while most regulators used multiple rating levels to make meaningful distinctions in performance, employees were usually concentrated in one or two rating categories and all had very few poor performers.
Finally, one agency did not complete performance ratings for senior officers due to lack of funding for pay increases, thereby missing an opportunity to provide valuable feedback. For information about the other four key practices as well as additional material pertaining to the linking pay to performance practice, see appendix III. All of the regulators awarded some performance-based increases during the appraisal cycles we reviewed that were linked to employees’ performance ratings, although two financial regulators also provided annual pay adjustments to employees, regardless of performance, during the appraisal cycles we reviewed. Specifically, CFTC provided all employees an across-the-board pay increase equivalent to the cost-of-living adjustment received by General Schedule employees of the federal government in January 2006. During the 2005 appraisal cycle, SEC also provided all employees an across-the-board pay adjustment of 2.1 percent, regardless of their performance. SEC officials noted that this across-the-board pay adjustment was in accordance with the negotiated compensation agreement with the union. While the percentages of employees rated as unsuccessful or unacceptable at CFTC and SEC during those cycles were extremely small (less than 1 percent), these agencies lost opportunities to reinforce the linkage of pay to performance in their performance management systems. CFTC officials told us that the performance-based pay portion of the new performance management system that will begin on July 1, 2007, will require a minimum threshold performance rating for an employee to be eligible for a pay increase. SEC and its union are currently negotiating implementation of a new Compensation and Benefits Agreement, which provides that employees rated as unacceptable will not receive annual pay adjustments. SEC officials acknowledged that a negative perception occurs when employees who are not performing satisfactorily receive a pay increase.
Most of the financial regulators used their rating systems to differentiate individual performance to award performance-based increases and reward top performers during the appraisal cycles we reviewed. Furthermore, all of the agencies also provided increases that, while not directly linked to performance ratings, considered employee performance in some way. These increases included special bonuses or awards given to individuals or teams for special accomplishments or contributions, as well as promotions and within-pay-band increases. For example, FCA provided Achievement or Special Act Awards to employees for significant achievements or innovations towards a special program, project, or assignment that contributed to the agency’s or organizational unit’s mission, goals, and objectives. To receive these awards, employees had to have performed their regular duties at least at a fully successful level of performance. FCA also provided some pay increases for competitive and noncompetitive promotions during the completed appraisal cycle we reviewed. Pay increases linked to performance ratings accounted for only part of the total increases awarded to individual employees during the appraisal cycles we reviewed. See appendix III for more information on the different ways in which the regulators translated performance ratings into pay increases and budgeted for performance-based increases, as well as more information on other pay increases that involved considerations of performance. All of the financial regulatory agencies have built safeguards into their performance management systems to enhance the credibility and fairness of their systems, although they varied in how safeguards have been implemented. 
For example, with the exception of SEC, the agencies have used the safeguard of establishing and communicating (1) standards for differentiating among performance rating categories and (2) criteria for performance-based pay decisions, thus enhancing transparency, which can improve employee confidence in the performance management system. (See app. III for information on the financial regulators’ implementation of additional safeguards.) CFTC’s four-level rating system (i.e., unsuccessful, successful, highly successful, and exemplary) defined the successful level of performance for areas that CFTC had identified as critical to employees’ job performance, and included some information on how to distinguish variations from the successful level of performance. However, an employee representative at CFTC maintained that the rating level descriptions did not sufficiently communicate to employees the skills and behaviors employees needed to demonstrate in order to move, for example, from the “successful” to the “highly successful” level. Employee representatives stated that even though there was helpful guidance on distinguishing between levels of performance in a CFTC manual, these descriptions were hard to understand and most employees did not refer to the CFTC manual for guidance. An agency official told us that the revised performance management system that went into effect in October 2006 is a five-level system, and includes descriptions of all five performance levels rather than only the successful rating level described in the system it replaced. Similarly, OFHEO defined how employees would be rated on its five-level rating scale for each of the performance elements included in their performance plans. These performance standards defined the middle level of performance (fully successful), and included what the rater should look at to determine if an employee is performing better or worse than this benchmark. 
An employee’s performance for each element was assessed and a total score was determined. OFHEO further distinguished between “high” and “low” levels within rating categories. For example, a rating of “outstanding” would be classified as being in either the high or low level of the outstanding rating category based on the performance score the employee received. Merit increases at OFHEO have been determined directly by employees’ performance ratings, so employees could ascertain the merit increases they would receive for given performance ratings. For example, an employee rated “high” commendable receives a higher merit pay increase than one who is rated “low” commendable. OFHEO employee working group members noted that both supervisors and employees understand how the performance elements and standards have been applied through rating decisions, and they stated that employees generally understood what was expected of them to attain higher levels of performance and associated merit increases. However, an employee working group member also commented that when distinguishing between performance rating levels, some managers seemed to apply the performance standards more effectively than others, which could result in differences in how rating decisions were made. At FDIC, for nonexecutive/nonmanager employees to be eligible for performance-based pay increases, employees had to first earn a “meets expectations” rating. Then, in a second process called the “Pay for Performance” system, FDIC nonexecutive/nonmanagers were placed into one of four pay groups, based on an assessment of total performance and corporate contributions as compared with other employees in the same pay pool. The pay for performance program was essentially comparative, meaning that the contributions and performance of each employee were evaluated and rewarded on a relative basis within his or her pay pool, as compared to peers. 
According to union representatives, employees were not informed about how management made the distinctions in pay increase groupings. According to FDIC officials, there are no definitive descriptions or definitions of the performance levels for each of the three pay groups because employees are assessed compared to each other, not against fixed standards. Officials also said that information on the system for determining pay groups was provided to all employees in early 2006 after the compensation and benefits agreement became effective, when the system was first rolled out, and is explained to new employees at orientation. We did not determine how widespread the concern about how management made distinctions in pay increase groupings was among FDIC employees. In contrast, SEC officials did not establish standards upon which to base rating decisions for nonexecutive employees, nor did they communicate criteria used to make performance-based pay decisions to these employees. For its nonexecutive employees, SEC used a two-level rating system in which individuals’ performance was rated as acceptable or unacceptable. According to agency management, SEC followed the definitions under Title 5 that are used by the rest of the government for differentiating between acceptable and unacceptable performance. However, SEC did not establish written performance standards for appraising employees’ performance as acceptable or unacceptable. To determine performance-based pay increase amounts for nonexecutive employees, SEC developed a second phase process that involved making distinctions in contributions for those individuals who received a summary performance rating of acceptable. As part of the second phase, employees and their supervisors submitted contribution statements summarizing the employees’ accomplishments during the appraisal cycle. 
Using the summary statements and the supervisors’ own assessments, supervisors placed employees into one of four categories: (1) made contributions of the highest quality, (2) made contributions of high quality, (3) made contributions of quality, and (4) made no significant contribution beyond an acceptable level of performance. Next, a compensation committee within each office or division evaluated the contribution statements and the supervisors’ placements. For each employee, the committee recommended a merit pay increase ranging from zero to 4.41 percent (corresponding to “steps” 0 to 3) to an official from each office or division, who made the final determination of the employee’s merit increase. However, SEC did not develop criteria to differentiate between the four contribution categories that the compensation committees considered when recommending merit pay step increase amounts. In addition, SEC employee representatives told us that it was not clear to employees how the contribution statements and the subsequent supervisory recommendations were translated into the decisions about the four contribution categories into which employees would be placed. SEC officials noted that employees received copies of narratives written by their supervisors to describe the employees’ contributions; however, they acknowledged that the system could be more transparent. According to SEC officials, in an effort to increase transparency in the future, they plan to share with employees information on supervisors’ preliminary recommendations on ratings that are provided to the compensation committee, so that employees can see into which of the four contribution categories they were recommended for placement and the supporting documentation. If the committee changes an initial recommendation from a supervisor, SEC will provide the employee with the rationale for the change. 
An agency official indicated they are developing broad statements, such as “the committee had a broader perspective of employee contributions,” that address a range of possible reasons for changes. The extent to which the financial regulators shared the overall results of performance ratings and pay increase decisions with all employees varied, and some agencies did not make this information widely available to employees. We have previously reported that the safeguard of communicating the overall results of performance appraisal and pay increase decisions while protecting individual confidentiality can improve transparency by letting employees know where they stand in the organization. An employee’s summary performance rating conveys information about how well an employee has performed against established performance standards, which is important, but not sufficient to provide a clear picture of how the employee’s performance compares with that of other employees within the organization. When the organization communicates where an employee stands, management can gain credibility by having honestly disclosed to the employee the basis for making pay, promotion, or developmental opportunity decisions that may have been based upon relative performance. The Federal Reserve Board communicated the overall results of the performance appraisal decisions to all employees by sharing annual performance rating distributions with all employees, disaggregated by division. Since this system for determining the amounts of performance-based increases for individuals based on their performance ratings is essentially driven by formula, employees know what their merit increases will be relative to others after receiving their performance ratings.
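Formula-driven systems like these translate a rating or contribution category directly into a pay increase. As a rough illustration, SEC's second-phase merit steps described earlier (merit increases from zero to 4.41 percent, corresponding to "steps" 0 to 3) can be sketched as follows. Only the two endpoint percentages come from this report; the interior step percentages and the category-to-step mapping are assumptions made purely for illustration.

```python
# Hypothetical sketch of a step-based merit mapping. Only the endpoints
# (step 0 = 0%, step 3 = 4.41%) are reported; the interior percentages are
# assumed to be evenly spaced, and the category-to-step mapping is invented.

# Contribution categories as described in the report, best to least.
CATEGORIES = [
    "highest quality",               # category 1
    "high quality",                  # category 2
    "quality",                       # category 3
    "no significant contribution",   # category 4
]

# Assumed mapping of merit "step" to percentage increase.
STEP_PCT = {0: 0.0, 1: 1.47, 2: 2.94, 3: 4.41}

def merit_increase(base_salary: float, category: str) -> float:
    """Return the dollar merit increase implied by a contribution category."""
    step = len(CATEGORIES) - 1 - CATEGORIES.index(category)  # category 1 -> step 3
    return base_salary * STEP_PCT[step] / 100

# A hypothetical employee earning $100,000 placed in the top category:
print(round(merit_increase(100_000, "highest quality"), 2))  # 4410.0
```

The point of such a formula-driven design, as the report notes for the Federal Reserve Board, is transparency: once an employee knows their rating or category, the size of the increase follows mechanically.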
At FDIC, the distribution of pay group assignments for all nonexecutive/nonmanager employees who passed the first assessment process is fixed by the negotiated agreement with the union, so those employees know how performance-based pay increases will be distributed and the amounts of increases received by the various pay groups. Further, FDIC officials told us that, in accordance with the collective bargaining agreement, after completion of each annual pay for performance cycle they share data on the results of the pay grouping decisions for employees covered by the bargaining unit contract with union representatives. These include summary pay group data analyzed according to the agreement with the union, such as certain demographic data and individual rating information. According to an agency official, OCC began to post some limited information on the average size of some performance-based pay increases on the agency intranet in November 2006. The information included the average, agencywide percentage increases for merit increases, merit bonuses, and special act and spot awards, as well as the percentage of employees receiving the increases. During the performance appraisal cycle we reviewed, OTS shared with union representatives some data on average pay increases. The agency did not share ratings distribution data with the union, and did not make either performance-based pay increase or rating results information available to all employees. However, in November 2006, OTS distributed to all employees information for the recently completed appraisal cycle on the percentage of employees who received each performance rating level and the average pay increase percentages to be received by people at each level. The information was disaggregated by regions and Washington, D.C.
While SEC did not make the results of performance rating decisions available to all employees, officials said that they reported information on performance awards (bonuses) to the union and that, under implementation of the compensation and benefits agreement currently being negotiated, they plan to publish aggregated information on performance ratings under the planned new performance management system for nonexecutive employees. SEC officials also told us that they plan to provide information at the lowest possible organizational level while still protecting individual confidentiality. The remaining five financial regulators did not share overall data on ratings or performance increases widely with all employees, although in some cases some information was shared with managers. The following outlines how information was shared:

- CFTC shared information on the results of ratings and award decisions with managers on a Pay Parity Governance Committee, but not with all employees, for the appraisal cycle we reviewed. CFTC officials told us that there is no prohibition against sharing this type of information under the new performance management system directive, and they are aware that there is some interest among employees in receiving it. They said that the pay parity committee will determine whether there is value in releasing this information to all employees in the future.

- At FHFB, an official told us that office directors see all the ratings within their offices and make the decisions about the performance-based pay increases for employees, but this information is not shared across offices or with all employees. However, the director of the Office of Supervision, FHFB’s largest office, has shared information with all staff in the office on the ranges of pay increases corresponding to different performance rating levels and base salary levels that were received by staff within the office for a given year, as well as the standards used to assign the merit increase amounts.

- Officials at OFHEO told us that just last year they started sharing information on the results of ratings and pay increase decisions with management, but that they have not yet shared this type of information with all employees.

- FCA officials told us that they do not share aggregate results of the performance rating and pay increase decisions with all employees. They explained that, under a previous administration, in early 2000, an executive summary containing information on the results of ratings and pay increases was prepared and posted where all employees could potentially access it. However, this information was not broadly disseminated directly to employees.

- NCUA shares information on the results of the merit pay decisions with directors, but not with all employees. An NCUA official told us that it is up to the directors to decide whether to share this information with their staff. In comments on the draft report, NCUA explained that this is one of the issues involved in its current negotiations over pay and benefits with the National Treasury Employees Union, and that the agency’s proposal to the union does provide for this type of transparency.

Agencies provided a variety of reasons for not sharing overall ratings and pay increase information more widely. Officials from FHFB and FCA told us that the relatively small size of their agencies (122 and 248 employees, respectively) makes it harder to share this type of information while protecting individual confidentiality; an FHFB official also noted being unaware of employee demand for this type of information.
FCA officials also mentioned that the emphasis in their performance management system is on rating individual employees against the standards, not against other employees and they wanted employees to focus on their individual ratings and performance. According to union representatives at OCC, the union has made multiple requests for data on the results of the performance rating and pay increase decisions but management has declined to share information that would enable the union to, in their words, perform a meaningful independent analysis of the ratings and pay increase decisions. OCC officials told us that they prefer not to share with employees disaggregated information on ratings and pay increase distributions because organizational units administer the process differently. For example, the percentages of individuals rated at the highest level (4) and next highest level (3) vary from unit to unit. Because units receive fixed pools of funds for performance-based increases, the average size of a merit increase that an employee receiving a level 4 may receive can vary from unit to unit, depending on how many individuals receive the highest rating. OCC officials told us that sharing information on average merit increases by unit with employees, without sufficient context of the factors considered when making these decisions, including more detailed rating information (which is privacy protected), could lead to misinterpretation of the data. However, not sharing information on the results of the performance rating and pay increase decisions processes can detract from the goal of a transparent and fair performance management system. This information needs to be presented in ways that protect individual confidentiality, such as by aggregating it. Without access to this type of information, individual employees can lose a valuable opportunity to understand how their performance stands relative to others in their organization. 
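The fixed-pool arithmetic underlying OCC officials' concern can be illustrated with a short sketch: when a unit's merit budget is fixed, the average increase for a top-rated employee shrinks as more employees in that unit earn top ratings. The pool size, rating weights, and headcounts below are invented for illustration; only the fixed-pool mechanism itself is drawn from the report.

```python
# Hypothetical illustration of a fixed merit pool split by rating weight.
# A level-4 (top) rating is assumed to earn twice the share of a level-3 rating;
# the dollar figures and headcounts are invented.

def level4_share(pool: float, counts: dict, weights: dict) -> float:
    """Average merit dollars per level-4 employee when a fixed pool is
    divided among employees in proportion to their rating weights."""
    total_weight = sum(weights[rating] * n for rating, n in counts.items())
    return pool * weights[4] / total_weight

pool = 100_000                  # hypothetical unit merit budget
weights = {4: 2.0, 3: 1.0}      # assumed relative shares per rating

# Unit A: 10 employees rated 4, 40 rated 3.
print(round(level4_share(pool, {4: 10, 3: 40}, weights)))  # 3333
# Unit B: 25 rated 4, 25 rated 3 -- same pool, smaller payout per level-4 employee.
print(round(level4_share(pool, {4: 25, 3: 25}, weights)))  # 2667
```

This is why, as OCC officials suggest, an average merit increase by unit can be misleading without context: identical ratings can yield different dollar amounts depending on how many colleagues share the top category.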
In cases where agencies negotiate agreements with unions, an important consideration is to reach agreement to share aggregate results of the rating and pay increase decisions with employees, while protecting individual confidentiality. While most of the financial regulatory agencies used multiple rating levels to assess employee performance and make distinctions in performance, at most agencies employees were concentrated in one or two rating categories and very few received poor performance ratings. By using multiple-level rating systems, agencies have the capability to make meaningful distinctions in performance. Effective performance management systems make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. As we have previously reported, performance management systems can provide management with the objective and fact-based information it needs to reward top performers and provide the necessary information and documentation to deal with poor performers. More specifically, using multiple rating levels provides a useful framework for making distinctions in performance by allowing an agency to differentiate, at a minimum, between poor, acceptable, and outstanding performance. We have reported that two-level rating systems by definition will generally not provide meaningful distinctions in performance ratings, with possible exceptions for employees in entry-level or developmental bands. Eight agencies used four or more rating levels. For example, as described earlier, OFHEO used a five-level rating category system to appraise employee performance and contributions toward achieving agency goals, and further distinguished between high and low performance scores within rating categories. 
As shown in figure 1, at the eight agencies with four- or five-level rating systems, the largest percentage of employees fell into the second highest rating category, except at OFHEO and the Federal Reserve Board. At OFHEO, more than half of the employees were placed into the high or low levels of the top rating category. Conversely, at the Federal Reserve Board (excluding economists), almost half of the employees fell into the third highest or middle (commendable) rating category. Across the eight agencies shown in figure 1, the percentage of employees who fell into the highest rating category varied from 10.6 percent for economists at the Federal Reserve Board to 55 percent of employees at OFHEO. SEC and FDIC used two-level rating systems (essentially pass/fail systems) to appraise the performance of certain groups of employees. Although two-level rating systems by definition will not provide meaningful distinctions in performance ratings, both SEC and FDIC used a second process to determine performance-based pay increases and effectively make more meaningful performance distinctions. As figure 2 shows, the highest percentage of employees at FDIC fell into the second highest of four categories, in keeping with the fixed percentages included in the negotiated agreement with the union. At SEC, the largest percentage of employees fell into the third highest of four rating categories. As shown in figures 1 and 2, the percentage of employees rated as poor performers at each agency was very small during the completed performance appraisal cycles we reviewed. Employees rated below the successful and meets-expectations rating levels accounted for less than 3 percent of employees across the agencies. OTS had zero employees in the bottom two rating categories combined—all OTS employees received fully successful or higher ratings.
Similarly at NCUA, no executives and 2.1 percent of nonexecutives were rated below minimally successful. While the financial regulators rated very few employees as poor performers, all of the agencies have established procedures to deal with poor performers. When an employee does not perform up to a threshold standard for satisfactory performance, most agencies place the employee on a performance improvement plan or provide counseling for the employee, and the employee does not receive a performance-based increase at the end of the performance cycle. For example, OTS has addressed poor performance by working with the employee to improve his or her area of deficiency. An employee who receives a rating at the unacceptable level is placed on a performance improvement plan for a minimum of 90 days. Specifically, OTS policy advises supervisors to develop a performance improvement plan by identifying the performance areas in which the employee is deficient and the types of improvements, including specific work products and steps to be followed which the employee must complete to attain the fully successful performance level. In addition, according to OTS policy, the agency may provide the employee with closer supervision, or on-the-job or formal training. However, governmentwide, 29.7 percent of employees indicated in the 2006 Office of Personnel Management (OPM) Federal Human Capital Survey that they agreed or strongly agreed that differences in performance within the work unit were recognized in a meaningful way. Positive responses to this question for the eight financial regulators who participated in the survey ranged from 24.9 percent for CFTC to 41.6 percent for OCC. None of these agencies had a majority of their employees provide positive responses to this question, and only three of the eight agencies had more than one third of their employees provide positive responses to this question. 
In what may have been an isolated incident, SEC effectively made no distinctions in rating its senior officers’ performance during the appraisal cycle we reviewed because the agency did not complete performance ratings for them in 2005. According to SEC officials, no funds were available for performance-based bonuses (which are normally dependent on performance ratings) during that assessment cycle. As a result, divisions performed assessments of senior officers, but the assessment process was not completed and their ratings were not signed by the Chairman for the October 1, 2004, to September 30, 2005, performance appraisal cycle. A recent SEC Inspector General report confirmed that senior officers in SEC’s Enforcement Division did not prepare performance review documents for the performance cycle that ended on September 30, 2005, and recommended that required steps of the senior officer performance appraisal process be conducted in accordance with Commission policy, even when merit increases are not awarded. All senior officers received annual across-the-board salary increases during that cycle. Conducting performance appraisals and making distinctions in performance are important not only for determining performance-based pay increases, but for providing feedback to help employees improve their performance and assess how their work contributed to achieving organizational goals. By not appraising their performance, SEC missed an opportunity to provide valuable feedback to senior officers. Financial regulators have hired external compensation consultants to conduct individual, formal comparability surveys, exchanged pay and benefits information, explored the feasibility of conducting a common survey, and adjusted pay and benefits to seek to maintain pay and benefits comparability. The majority of the financial regulators conducted pay comparability surveys that have included other financial regulators and, in some instances, private-sector entities.
To compare pay across agencies, consultants send questionnaires on behalf of the sponsoring agency and ask participating agencies to match the jobs based on the job descriptions provided. To compare benefits, consultants use various methods, such as side-by-side comparisons of benefits and calculation of total cost of benefits per employee. In addition to these surveys, human capital officials at the 10 financial regulators have formed an interagency group to exchange information and consult on topics such as updates on merit pay ranges and bonuses. However, agency officials told us that because many of the financial regulators conduct separate comparability surveys, their staffs have to respond to numerous and often overlapping inquiries, which can be inefficient. To begin addressing the inefficiencies of this process, the agencies formed a subcommittee in December 2006 to study the feasibility of conducting a common survey on pay and benefits. According to agency officials, the subcommittee also has discussed the feasibility of establishing a Web-based data system to make the most current pay and benefits information available to participating agencies. In the absence of a legislative definition of what constitutes comparability, agency officials told us that they use various methods to assess pay and benefits comparability after they have obtained relevant data from the other agencies. For example, FDIC has sought to set its total pay ranges (base pay plus locality pay) for specific occupations and grade levels within 10 percent of the average of FIRREA agencies. FCA used the average market rate paid by other financial regulators as a benchmark. Finally, partly on the basis of the results of the comparability surveys and discussions among the agencies, the financial regulators have adjusted their pay and benefits policies in their efforts to seek to maintain comparability. 
For example, as a result of gaining pay flexibilities, CFTC implemented new pay ranges for its 2003 pay schedule, and increased base pay by 20 percent for all eligible employees to partially close the 25 percent gap between CFTC and FIRREA agencies. Appendix IV provides additional information on our analysis of individual agency actions. While the regulators have taken actions to seek to maintain comparability in their pay and benefits, there are some variations in base pay ranges and benefit packages among the agencies. Figure 3 shows the base pay ranges (minimum and maximum) for the mission-critical occupations, excluding executives, at the 10 agencies. As shown in the same figure, the actual average base pay among the 10 agencies also varies somewhat in relation to the agencies’ respective base pay ranges, which, according to agency officials, could be affected by the average length of service of employees and the fact that some agencies tend to hire employees at certain grade levels. Because each financial regulator sets its own locality pay percentage based on its respective policies, locality pay percentages often differ from those that OPM sets for General Schedule employees (with the exception of CFTC) and vary among agencies for the same duty station. For example, in New York City, the OPM locality pay percentage is 22.97 percent, but the regulators’ locality pay percentages range from 21.19 percent at FDIC and FHFB to 33.20 percent at OTS. Table 2 shows the locality pay percentages for OPM and for the eight financial regulators that have locality pay percentages for selected cities. The benefits that the 10 financial regulators offered also varied, which we discuss in detail in appendix IV. For example, half of the regulators offer their employees 401(k) retirement savings plans with varying employer contributions in addition to offering the governmentwide federal Thrift Savings Plan (except for the Federal Reserve Board). 
According to agency officials, factors such as the year an agency first became subject to comparability provisions, budget constraints, the needs and preferences of different workforces, and ways to attract and retain workforces play a role in compensation decisions and contribute to the variations in pay ranges and benefits. Moreover, agency officials emphasized that it was not their goal to have identical pay and benefits packages; rather, they considered pay and benefits as a total package when seeking to maintain pay and benefits comparability and when setting pay policies aimed at recruiting and retaining employees. While the total number of financial regulatory employees resigning from federal employment between fiscal years 1990 and 2006 generally declined, there was no clear trend among the number who moved to another financial regulator. As shown in figure 4, the number of employees leaving one federal regulator for another declined from the previous fiscal year in 10 of the 16 years and increased from the previous fiscal year in the other 6 years. Figure 4 also shows the percentage of financial regulatory employees who went to another financial regulator, went to other federal agencies, and resigned from federal employment, and the total number of financial regulatory employees during this period. Of the 15,627 employees who left their financial regulatory agency voluntarily (i.e., moved to another financial regulator or executive branch agency, or resigned) from fiscal year 1990 through fiscal year 2006, the vast majority—86 percent (13,433)—resigned from the federal government. The number of employees who moved to another financial regulator ranged from a low of 16 of 1,362 who moved or resigned in fiscal year 1997 to a high of 97 of 1,229 who moved or resigned in fiscal year 1991. 
The total number of financial regulator employees was 15,400 and 19,796 during those 2 years, respectively. Similar lows were also experienced in 1996 and 2003. Some agency officials told us that they believe that the FIRREA comparability provision and similar provisions in subsequent laws applicable to financial regulators have been effective in ensuring that regulators’ pay and benefits are generally comparable among the 10 agencies, which probably helps minimize employee movement among financial regulatory agencies. Of the financial regulator employees who moved or resigned, the percentage of those who resigned from federal employment fluctuated slightly over the period, ranging from a low of 73.7 percent in fiscal year 2003 to a high of 94.8 percent in fiscal year 1996. The movement of mission-critical employees among financial regulators also did not reveal a discernible trend. For the number of employees who moved to another financial regulator from fiscal year 1990 through fiscal year 2006, see table 10 in appendix V. The numbers ranged from no movement for 7 of the 11 occupational categories (accountants, auditors, business specialists, economists, financial analysts, investigators, and information technology (IT) specialists) in at least 1 of the fiscal years we reviewed to a high of 37 employees (38.1 percent of all those who moved that fiscal year) for the “all other” occupational category in fiscal year 1991. During this period (fiscal years 1990 to 2006), some occupational categories experienced very little movement. For example, fewer accountants, auditors, business specialists, and investigators moved than employees in the other categories. In contrast, examiners had the largest number of employees moving among financial regulators in 8 of the 17 years, including the 3 most recent years for which data were available. 
The average number of employees in mission-critical occupations moving among the 9 financial regulators from fiscal year 1990 through fiscal year 2006 ranged from 0.1 for investigators to 11.7 for examiners. See appendix V for additional data on employee movement. For those employees who did not move to another financial regulator, we could not determine in all cases where the employees moved because CPDF, the most complete data set available with federal employment information, does not include information on employment outside executive branch agencies. We were able to identify those employees who went to another federal agency. These numbers ranged from a low of 48 in fiscal year 1994 to a high of 128 in fiscal year 1991, higher than the number of employees who moved to another financial regulator, which was 23 in fiscal year 1994 and 97 in fiscal year 1991. Officials from the 9 agencies told us that they do not track the employment of their employees after the employees leave their agencies. Further, they said that their employees generally sought employment outside the federal government, including the private sector and state and local government, but that their main competitors were private-sector entities. Like those of other federal agencies, the experiences of the financial regulators illustrate the challenges inherent in establishing well-functioning, performance-based pay systems and show that these systems are works in progress that are constantly evolving. These regulators have taken various approaches to revise their performance management systems and introduce performance-based pay. Although the regulators have incorporated many of the key practices for effective performance management systems, opportunities exist for a number of them to make improvements as they continue to refine their systems. 
Specifically, some regulators have opportunities in the areas of strengthening safeguards to enhance transparency and fairness and making meaningful distinctions in performance. As some regulators develop new systems or revise their existing systems, they have an opportunity to build in aspects of the key practices, such as improving transparency by communicating the overall results of performance appraisal rating and performance-based pay increase decisions to all employees to help employees understand how they performed relative to other employees in their organization, while protecting individual confidentiality. For regulators that negotiate with unions, there are also opportunities to work together to accomplish this. SEC has some additional opportunities to pursue improvements in specific aspects of its performance management system, which it is in the process of revamping. For example, SEC can establish and communicate to nonexecutive employees using the new system clear criteria for making performance rating and pay increase decisions. Finally, while it may have been an isolated incident, by not completing performance assessments of senior officers in the 2005 performance appraisal cycle we reviewed, SEC missed an opportunity for two-way feedback and assessments of individual and organizational progress toward organizational goals. While funding circumstances specific to that appraisal cycle contributed to this situation, in the future it will be important to complete assessments regardless of the availability of funding for increases. The agencies have taken a variety of actions in seeking to maintain pay and benefits comparability. 
While we did find some variation in base pay ranges, locality pay percentages, actual average pay, and benefits among the agencies, we found that a number of reasons could contribute to the variation, including the following: the regulators were granted flexibility under Title V and became subject to comparability requirements at varying times; pay and benefits are considered comprehensively in seeking comparability; employees differ in average length of service; and employees are located in different areas. While pay and benefits comparability cannot be precisely determined, all the agencies are working to maintain comparability in pay and benefits. One recent initiative—studying the feasibility of conducting a common survey on pay and benefits—should help to increase the efficiency of this effort. In addition, given the relatively small amount of employee movement among federal regulators, the variation in pay, benefits, and locality pay percentages in some locations across the regulators does not appear to be encouraging large numbers of employees to move among financial regulators. This may be an indication that the comparability provisions of FIRREA and other pertinent legislation have been working as intended. Moreover, from fiscal years 1990 through 2006, the agencies’ attrition rates have trended downward, indicating that a smaller percentage of employees were leaving. The Chairman of the Board and Chief Executive Officer of the Farm Credit Administration, the Chairman of the Federal Housing Finance Board, and the Director of the Office of Federal Housing Enterprise Oversight should communicate the overall results of the performance appraisal and pay increase decisions to all employees agencywide while protecting individual confidentiality. 
The Chairman of the National Credit Union Administration and the Chairman of the Commodity Futures Trading Commission should work with unions to communicate the overall results of the performance appraisal and pay increase decisions to all employees agencywide while protecting individual confidentiality. The Chairman of the Securities and Exchange Commission should communicate clearly the criteria for making performance rating and pay increase decisions to nonexecutive employees; work with the union to communicate the overall results of the performance appraisal and pay increase decisions to all employees agencywide while protecting individual confidentiality; and assess senior executives’ performance at the end of the performance appraisal cycle regardless of the amount of funding available for performance-based pay increases. We provided drafts of this report to the Chairman, Commodity Futures Trading Commission; Chairman of the Board and Chief Executive Officer, Farm Credit Administration; Chairman, Federal Deposit Insurance Corporation; Chairman, Federal Housing Finance Board; Chairman, Board of Governors of the Federal Reserve System; Chairman, National Credit Union Administration; Comptroller of the Currency, Office of the Comptroller of the Currency; Director, Office of Federal Housing Enterprise Oversight; Director, Office of Thrift Supervision; and Chairman, Securities and Exchange Commission, for review and comment. We received written comments from six of the agencies. See appendixes VI, VII, VIII, IX, X, and XI for letters received from CFTC, the Federal Reserve Board, FHFB, NCUA, OFHEO, and SEC. These six, along with the other four agencies, also provided clarifying and technical comments, which we incorporated as appropriate. The agencies generally agreed with our recommendations. 
With respect to the recommendation to communicate the overall results of the performance appraisal and pay increase decisions on an agencywide basis, CFTC, FCA, FHFB, NCUA, OFHEO, and SEC indicated that they plan to implement the recommendation. In describing specific actions, the Executive Director of CFTC explained that the agency has already discussed working with the unions to communicate overall results of performance appraisal and pay decisions across the agency as part of the development of its new performance management and pay-for-performance systems. The Chief Human Capital Officer of FCA stated that the agency plans to communicate the overall results of the 2006 performance appraisal and 2007 pay increase decisions to FCA employees by the end of June 2007. The Executive Director of NCUA explained that sharing overall information on ratings and pay increase decisions with all employees is one of the issues being negotiated as part of the ongoing negotiations over pay and benefits with the National Treasury Employees Union, and stated that the agency’s proposal to the union provides for this type of transparency. The Executive Director of SEC agreed with the report findings and stated that SEC has established a new branch within the Office of Human Resources to oversee performance-related issues and has launched a new pilot performance management system that will address the recommendations. Finally, the Acting Director of FHFB and the Chief Human Capital Officer of OFHEO also stated that their respective agencies will implement the recommendation. 
We will send copies of this report to the appropriate congressional committees; the Chairman, Commodity Futures Trading Commission; Chairman of the Board and Chief Executive Officer, Farm Credit Administration; Chairman, Federal Deposit Insurance Corporation; Chairman, Federal Housing Finance Board; Chairman, Board of Governors of the Federal Reserve System; Chairman, National Credit Union Administration; Comptroller of the Currency, Office of the Comptroller of the Currency; Director, Office of Federal Housing Enterprise Oversight; Director, Office of Thrift Supervision; Chairman, Securities and Exchange Commission; and other interested parties. We will make copies available to others upon request. The report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact Orice M. Williams at (202) 512-8678 or williamso@gao.gov or Brenda Farrell at (202)512-5140 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XII. The objectives of this report were to (1) review how the performance- based pay systems of 10 federal financial regulatory agencies are aligned with six key practices for effective performance management systems, (2) review actions these 10 agencies have taken to assess and implement comparability in compensation, and (3) review the extent to which individuals in selected occupations have moved between or left any of the agencies. 
These agencies are the Commodity Futures Trading Commission (CFTC), the Farm Credit Administration (FCA), the Federal Deposit Insurance Corporation (FDIC), the Federal Housing Finance Board (FHFB), the Board of Governors of the Federal Reserve System (the Federal Reserve Board), the National Credit Union Administration (NCUA), the Office of the Comptroller of the Currency (OCC), the Office of Federal Housing Enterprise Oversight (OFHEO), the Office of Thrift Supervision (OTS), and the Securities and Exchange Commission (SEC). To address our first objective, we analyzed documents on the regulators’ performance management and pay systems, including guidance, policies, and procedures on the systems; performance planning and appraisal forms; union contracts and agreements; training materials; internal evaluations of systems; and materials used to communicate with employees about the systems. We also reviewed documents assessing the agencies’ systems, including results from the 2006 Federal Human Capital Survey conducted by the Office of Personnel Management (OPM), recent human resources operations audits performed by OPM, and relevant material from agencies’ offices of inspector general. We also interviewed key human resources officials at each agency, as well as officials from other functional areas knowledgeable about each agency’s performance-based pay practices. In addition, we interviewed employees at the agencies who served as members of employee groups. At six of the agencies, the employees we spoke with were union representatives. Specifically, employees at FDIC, OCC, NCUA, and SEC are represented by the National Treasury Employees Union, and OTS headquarters staff and CFTC staff at two regional offices are represented by the American Federation of Government Employees. Employees at FCA, the Federal Reserve Board, FHFB, and OFHEO did not have a union; at these agencies we spoke with employees who served on employee committees or working groups. 
In addition, we examined small, select sets of individual performance plans for employees, which outline the annual performance expectations for employees. The selection of these performance plans was not intended to allow us to make generalizations about all performance plans at the agencies, and we have used information from the plans for illustrative purposes only. The performance plans we received were selected by agency officials based on our request for a mix of occupations and grade levels at each agency. The smallest number of performance plans we examined from an agency was one, in a case where the performance plans for all employees are completely standardized. The largest number of plans we reviewed from an agency was 32. The individual performance plans we examined pertained to each agency’s last completed performance appraisal cycle when we began this review. Table 3 shows the appraisal cycle by agency. Finally, we analyzed data from each agency on performance ratings and performance-based pay awarded to employees as well as aggregate data on all types of pay increases at each agency not linked to performance ratings. We used these data to calculate the Spearman rank correlation coefficient to show the strength of the relationship between employee performance ratings and the associated performance-based percentage pay increases at each agency. In computing the correlation coefficients, we noted that a few agencies used a table or procedure that specified particular pay increases corresponding to specific ratings. Taken in isolation, the use of the table or procedure would be expected to produce a perfect correlation, i.e., +1.0. However, other aspects of these agencies’ systems contributed to the resulting coefficients being less than +1.0. For example, at one agency, employees with rating scores below a certain threshold were not eligible for any pay increase. 
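The Spearman calculation and the rating-threshold effect just described can be sketched as follows. The ratings, pay increases, and threshold below are hypothetical illustrations, not data from any agency; tied values receive the average of the ranks they span, as in the standard definition of the statistic.

```python
# Sketch of the Spearman rank correlation between performance ratings and
# pay increase percentages. All data here are hypothetical.

def average_ranks(values):
    # Rank values 1..n, assigning tied values the mean of their ranks.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the rank vectors.
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical ratings (5-level scale) and percentage pay increases.
# Employees rated below a threshold of 3 receive no increase; those tied
# zero increases keep the coefficient below a perfect +1.0.
ratings = [5, 4, 4, 3, 2, 1]
increases = [6.0, 4.5, 4.5, 3.0, 0.0, 0.0]
print(round(spearman(ratings, increases), 3))  # → 0.985
```

Even with a strictly rating-driven increase table, the threshold (and similar nondiscretionary adjustments) pulls the coefficient below +1.0, which matches the pattern the methodology describes.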
While these employees may have had different rating scores, none of them received a pay increase, which contributed to a coefficient that was less than perfect. Other mechanistic factors in these agencies’ systems, such as adjusting or changing the specified percentage pay increase based on the grade level or current salary of the employee, also had the effect of producing a less than perfect coefficient at these agencies. Given the influence that these procedural but nondiscretionary variations may have had on the resulting coefficients at these agencies, the coefficients are primarily useful in their overall demonstration of the positive linkage between ratings and pay increases at all the agencies, and the range of coefficients that occurs. The magnitude of the coefficients, however, is not sufficient for ranking the agencies or making other types of comparisons. We also analyzed agency data on performance ratings to determine the distribution of employee performance ratings at each agency. All data were provided to us by agency officials, and pertained to the performance appraisal cycles noted in table 3. To address our second objective, we first analyzed the content of compensation comparability provisions in the agencies’ laws and related legislative histories. We reviewed the most recent pay and benefits surveys conducted by external compensation consultants for these agencies, obtained agency pay and benefits data, and analyzed actual pay data from CPDF. In addition, we interviewed agency officials about their experience with these surveys and the agencies’ informal interactions to assess pay comparability and to determine the feasibility of conducting a common survey. To report on the pay ranges for nonexecutive employees in selected occupations, we analyzed the base pay data provided to us for mission-critical occupations at nine of the agencies in our review. 
We selected the mission-critical occupations by (1) identifying nonclerical and non-blue-collar occupations with 45 or more employees in at least one financial regulatory agency and (2) vetting this list with the 10 agencies. The agencies provided us with pay range information as set forth in each agency’s pay policies as of September 2006 for every job title under each occupational category, including jobs with no incumbents at the time the agencies reported the data to us. To report on the actual average base pay of employees in the selected occupations, we analyzed actual pay data from CPDF for fiscal year 2006. Because the CPDF does not include data for all agencies, the Federal Reserve Board provided us with actual pay data for our analysis of its employees’ actual average pay for fiscal year 2006. To show the financial regulators’ locality pay percentages and General Schedule employees’ locality pay percentages, we selected the cities where four or more financial regulators had duty stations in fiscal year 2006. We obtained fiscal year 2006 locality pay percentage information from the financial regulators and General Schedule locality pay percentages from the OPM Web site. To report on the benefits offered by the agencies, we obtained and analyzed data from each agency that included a list of benefits the financial regulators offered as of September 2006 and brief descriptions of each benefit. We also interviewed agency officials about the factors that affect the actual average base pay, and how each agency sets its locality pay percentage. To address our third objective, we analyzed movement data from CPDF for fiscal years 1990 to 2006, the most recent available data as of December 2006. For each fiscal year, we identified the number of employees in selected mission-critical occupations at a financial regulator who (1) moved to another financial regulator, (2) moved to other federal agencies, and (3) resigned from federal employment. 
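The occupation selection screen described above (nonclerical, non-blue-collar occupations with 45 or more employees in at least one agency) can be sketched as a simple filter. The occupation names and headcounts below are hypothetical, chosen only to illustrate the rule:

```python
# Sketch of the mission-critical occupation screen: keep occupations with
# 45 or more employees in at least one agency. Clerical and blue-collar
# occupations are assumed to be excluded before this step.
headcounts = {
    "examiner": {"FDIC": 2500, "OCC": 1800},   # hypothetical counts
    "attorney": {"SEC": 900, "CFTC": 120},
    "economist": {"CFTC": 60, "FHFB": 25},
    "financial analyst": {"OFHEO": 30, "FHFB": 12},
}
THRESHOLD = 45

mission_critical = sorted(
    occupation
    for occupation, by_agency in headcounts.items()
    if max(by_agency.values()) >= THRESHOLD
)
print(mission_critical)  # → ['attorney', 'economist', 'examiner']
```

Note that an occupation qualifies if any single agency meets the threshold, even when every other agency employs far fewer people in it.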
We identified those who moved from one financial regulatory agency to another by identifying employees who had a CPDF separation code for a voluntary transfer and who also had a CPDF accession code from another financial regulatory agency within 25 days of the transfer out. Also, for each mission-critical occupation, we examined the number of financial regulator employees who moved to another financial regulator in each fiscal year and the average number of employees who moved among the nine financial regulators over the 17 years of our review. Our analysis of supervisors included executives, who constituted 1 to 2 percent of all supervisors who moved to another financial regulator. We also included all other agency occupations that were not classified as “mission-critical occupations” in an “all other” category, which includes occupations such as specialists in human resources management, administration, clerical, management and program analysis, blue collar occupations, financial administration, and paralegal work. We did not include the Federal Reserve Board in our analysis of the movement of financial regulator employees because CPDF does not include data on the Federal Reserve Board. Federal Reserve Board officials told us that data on employee movement for fiscal years 1990 to 1996 are not readily accessible. The agency provided us with some data for fiscal years 1997 to 2005, including data on employees who transferred, resigned, were fired, were subject to a reduction in force, or otherwise separated, and the agency’s total number of employees, but was unable to identify whether its employees left for another financial regulator. Because the data the agency provided were not comparable with the CPDF data we used for the other financial regulators, we did not include the Federal Reserve Board in our analysis. 
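The matching rule at the start of this discussion (a voluntary transfer-out record paired with an accession record at a different regulator within 25 days) can be sketched as follows. The record layout, field names, and sample records are assumptions made for illustration, not the actual CPDF schema or codes:

```python
from datetime import date, timedelta

# Sketch of the mover-identification rule: an employee counts as having
# moved between regulators when a voluntary transfer-out record is matched
# by an accession record at a different regulator within 25 days.

MAX_GAP = timedelta(days=25)

def find_movers(separations, accessions):
    # Index accession records by employee id for quick lookup.
    acc_by_emp = {}
    for acc in accessions:
        acc_by_emp.setdefault(acc["emp_id"], []).append(acc)
    movers = []
    for sep in separations:
        if sep["action"] != "voluntary_transfer":
            continue  # resignations and other separations are not moves
        for acc in acc_by_emp.get(sep["emp_id"], []):
            gap = acc["date"] - sep["date"]
            if timedelta(0) <= gap <= MAX_GAP and acc["agency"] != sep["agency"]:
                movers.append(sep["emp_id"])
                break
    return movers

# Hypothetical records for illustration.
separations = [
    {"emp_id": "E1", "agency": "OTS", "action": "voluntary_transfer",
     "date": date(2005, 3, 1)},
    {"emp_id": "E2", "agency": "FDIC", "action": "resignation",
     "date": date(2005, 4, 10)},
    {"emp_id": "E3", "agency": "OCC", "action": "voluntary_transfer",
     "date": date(2005, 1, 3)},
]
accessions = [
    {"emp_id": "E1", "agency": "OCC", "date": date(2005, 3, 14)},  # 13-day gap
    {"emp_id": "E3", "agency": "SEC", "date": date(2005, 3, 15)},  # 71-day gap
]
print(find_movers(separations, accessions))  # → ['E1']
```

Under this rule E2 is excluded because the separation was a resignation rather than a transfer, and E3 is excluded because the accession falls outside the 25-day window.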
We also did not include information on the employment of financial regulatory employees after they left federal employment because CPDF does not include data on employment outside executive branch agencies and officials told us that they do not track the employment of their employees after the employees leave their agencies. We assessed the reliability of the various sets of data used in our study. To assess the reliability of the performance and pay increase data provided by the agencies, we conducted various inspections and electronic testing of the data for reasonableness and the presence of any obvious or potential errors in accuracy and completeness. We also reviewed related agency documentation, interviewed agency officials knowledgeable about the data, and brought to the attention of these officials any concerns or discrepancies we found with the data for correction or updating. Based on the results of these procedures, we believe the data are sufficiently reliable for use in the analyses presented in this report. We did not independently verify the pay and benefits data we received from the agencies but consider these data sufficiently reliable for the illustrative purpose of our review. Based on our data reliability testing of CPDF data, we believe the CPDF data are sufficiently reliable for this review. When analyzing employee movement using CPDF data, we found exceptions from standard personnel procedures, such as employees with a transfer-out code but with an accession code in the hiring agency that was not a transfer-in code, or employee records with transfer-out and transfer-in dates that were more than 3 calendar days apart. We also found duplicate separation or accession records for the same individual on the same day. We deleted one of the duplicate records. We also found cases where an individual had two separation actions on the same day but they were different types of actions (e.g., a transfer out and a resignation). 
Because we could not determine which separation action was the correct one, we deleted both records. However, these types of data problems represented less than one-tenth of 1 percent of the data used. As a result, we concluded that the data were sufficiently reliable to show the magnitude of movement between financial regulatory agencies, to other federal agencies, and to nonfederal employers. We conducted our work from February 2006 through June 2007 in accordance with generally accepted government auditing standards. High-performing organizations have recognized that a critical success factor in fostering a results-oriented culture is a performance management system that creates a “line of sight” showing how team, unit, and individual performance can contribute to overall organizational goals and helping employees understand the connection between their daily activities and the organization’s success. Effective performance management systems are essential for successfully implementing performance-based pay. In the letter, we addressed important aspects of how 10 financial regulatory agencies have implemented two key practices: (1) linking pay to performance and (2) making meaningful distinctions in performance. This appendix provides detailed information on the financial regulators’ implementation of four additional key practices important for effective performance management systems, as well as some additional material pertaining to the linking-pay-to-performance practice covered in the letter. The four additional practices are to (1) align individual performance expectations with organizational goals; (2) connect performance expectations to crosscutting goals; (3) use competencies to provide a fuller assessment of performance; and (4) involve employees and stakeholders to gain ownership of performance management systems. 
The 10 financial regulatory agencies have implemented these four key practices for effective performance management systems in various ways, reflecting the unique needs of their organizational cultures and structures. The 10 federal financial regulatory agencies have implemented the practice of alignment in a variety of ways. An explicit alignment of daily activities with broader results is a key feature of effective performance management systems in high-performing organizations. These organizations use their performance management systems to improve performance by helping individuals see the connection between their daily activities and organizational goals and encouraging individuals to focus on their roles and responsibilities in helping to achieve these goals. The financial regulators reinforced alignment of individual performance expectations to organizational goals in policy and guidance documents for their performance management systems, used standardized performance elements or standards for employees in their performance plans, used customized individual performance expectations that contributed to organizational goals in individual performance plans, and included the corresponding organizational goals directly on the individual performance plan forms. Several of the financial regulatory agencies, including FDIC, OCC, FHFB, and OFHEO, have reinforced alignment by including language on linking individual performance expectations to organizational goals in policy and guidance materials for the performance management systems. The following are examples of how selected agencies have reinforced alignment through policies and guidance. 
A key objective of FDIC’s performance management program as stated in a policy directive is to “establish fair and equitable performance expectations and goals for individuals that are tied to accomplishing the organization’s mission and objectives.” The directive further states that employees at FDIC are assessed against performance criteria, which are defined as “the major goals, objectives, and/or primary responsibilities of a position which contribute toward accomplishing overall organizational goals and objectives” (as found in FDIC’s strategic plan and annual performance plan). At OCC, the Policies and Procedures Manual for the performance management system states that the system is designed to align employee performance expectations with organizational objectives and priorities. The manual also explains that the starting point for identifying individual performance expectations should be unit objectives established at the executive committee, district, field office, or division level. The handbook and guide for FHFB’s and OFHEO’s performance management systems, respectively, contain several references to alignment of individual expectations to organizational goals. Several of the financial regulators, including FCA, CFTC, FHFB, OCC and OTS, have reinforced alignment by including standardized performance elements or performance standards that link performance expectations to organizational goals in employees’ performance plans. We have previously reported that results-oriented performance agreements can be effective mechanisms to define accountability for specific goals and to align daily activities with results. Individuals from the agencies with standardized performance elements in their individual performance plans are assessed against the same set of performance elements and standards at the end of the appraisal cycle, as the following examples illustrate. 
FCA has included a requirement to contribute to the achievement of organizational goals in standardized performance elements for all employees in their individual performance plans. Specifically, FCA has developed a set of standardized performance elements for each of its four occupational groups and in some of these elements, requires individuals to contribute to achieving organizational goals and objectives. For the senior manager’s occupational group, individuals have a standardized performance element—“Leadership and Motivation Skills”—in their individual performance plans that measures the employees’ ability to accomplish the agency’s goals and objectives. For the other three occupational groups, individuals have a standardized performance element—“Teamwork and Interpersonal Skills”—in their individual performance plans that measures the extent to which the employee places emphasis on achieving organizational and team goals. In this way, all employees at FCA are assessed on the extent to which they contribute to organizational objectives through a standardized performance element. While not requiring a standardized performance element related to alignment in the individual performance plans for all employees, CFTC has reinforced alignment through the performance standards used for rating all employees at the end of the performance appraisal cycle. Specifically, in order for all employees to achieve the highest summary performance rating, individuals must “achieve element objectives with extensive impact on organizational mission,” which reinforces the line of sight between individual performance and organizational results. In this way, for all employees at CFTC, the individual’s contributions to organizational goals affect his or her ability to achieve the highest possible performance rating. 
Alignment is further reinforced for managerial employees at CFTC because they are also assessed on the standardized performance element of “Effective Leadership,” which requires them to, among other things, accomplish the mission and organizational goals of the work unit, and communicate organizational goals to subordinates. FHFB has reinforced alignment in standardized performance elements for several occupational groups. Standardized elements for executives, managers/supervisors, staff attorneys, and professional positions contain references to aligning with or contributing to organizational goals. OCC has applied an alignment focus in a generic performance standard for four occupational groups at the agency. Executives, managers, commissioned examiners, and specialists are all rated against a standardized performance standard that requires them to contribute to organizational goals in order to get the highest rating level of 4 for a particular performance element. For example, managers have a standardized performance element called “leadership skills,” for which the highest level performance standard includes language on meeting OCC goals and objectives. Commissioned examiners and specialists have a standardized performance element in their individual performance plans called “organizational skills,” with an accompanying performance standard that requires individuals’ work products to be closely aligned with OCC’s goals, objectives, and priorities in order to receive the highest rating level. OTS has reinforced alignment in a standardized performance element for managers and senior managers. Under the “Leadership Skills” standardized performance element, managers are assessed on accomplishing the agency’s goals and objectives, taking initiative and incorporating organizational objectives into the organization, and scheduling work assignments. 
In addition, senior managers have a supplemental performance element that holds them responsible for supporting the achievement of OTS’s strategic plan. An OTS official stated that the agency is considering expanding the requirement for alignment as it makes future changes to the performance management system. Several financial regulatory agencies, including SEC, OCC, and the Federal Reserve Board, have reinforced alignment for some individual employees through customized performance expectations specific to individuals that link to higher organizational goals. We have reported that high-performing organizations use their performance management systems to improve performance by helping individuals see the connection between their daily activities and organizational goals and encouraging individuals to focus on their roles and responsibilities to help achieve these goals. One way to encourage this is to align performance expectations of individual employees with organizational goals in individual performance plans. We reviewed a small, select set of individual performance plans from each agency, and identified the following examples of individual performance expectations that linked to higher organizational goals. The performance plan for a senior officer at SEC included the performance expectation “Plans and Coordinates Inspection Programs and Ensures that Internal Management Controls Exist and Operate Effectively” that supports SEC’s strategic goal to “Maximize the Use of SEC Resources.” In individual performance plans, OCC has used customized performance expectations unique to the individual in addition to standardized performance elements to appraise employees. Specifically, the performance plan for an information technology (IT) specialist included a customized expectation to provide timely, professional, and quality IT support to promote efficient utilization of OCC resources. 
This expectation supported the annual OCC objective—“OCC reflects an efficient and effective organization.” At the Federal Reserve Board, a performance plan for an economist contained a performance expectation to produce a weekly monitoring report on Japan and cover Japanese banking and financial issues, which contributed to one of the Board’s annual performance objectives in the area of monetary policy function: “contribute to the development of U.S. international policies and procedures, in cooperation with the U.S. Department of the Treasury and other agencies.” FHFB and OCC have reinforced the linkage between the individual’s performance expectations and organizational goals by including the corresponding organizational goals directly on the individual performance plan forms. This helps make clear the line of sight between the employee’s work and agency goals, as the following examples illustrate. FHFB has included the agency mission statement and office mission statement to which an employee is contributing at the top of the first page of the performance plan form. In many of the individual performance plans we examined from OCC, the annual OCC objective to which each customized performance element contributed was listed on the form, along with performance measures. According to an official, while OCC’s performance management policy does not specifically require that the higher organizational objective to which each customized performance element contributes be listed on the employee’s performance evaluation form, managers are advised to include the organizational goals and the majority of forms do include them. The official stated that it was an oversight not to include this requirement in the policy, and they plan to revise the performance evaluation form to include space for the corresponding organizational objectives. 
Figure 5 shows an example of how a customized performance element on an individual performance plan is linked to an agency goal, clarifying the relationship between individual and organizational performance. The financial regulatory agencies have connected performance expectations to crosscutting goals in several ways. As public sector organizations shift their focus of accountability from outputs to results, they have recognized that the activities needed to achieve those results often transcend specific organizational boundaries. We reported that key characteristics of high-performing organizations are collaboration, interaction, and teamwork across organizational boundaries. High-performing organizations use their performance management systems to strengthen accountability for results, specifically by placing greater emphasis on those characteristics fostering the necessary collaboration, both within and across organizational boundaries, to achieve results. The specific ways in which the financial regulatory agencies have connected performance expectations to crosscutting goals vary. In our review of a small, select set of performance plans from some of the agencies, we identified some examples of customized individual performance plans that identified crosscutting goals that would require collaboration to achieve, as well as either the internal or external organizations with which the individuals would collaborate to achieve those goals. All of the agencies recognized the importance of collaboration by including performance elements for collaboration or teamwork within and across organizational boundaries in individual performance plans for at least some employees. Several agencies applied standardized performance elements related to teamwork or collaboration to employees.
We found examples of performance plans customized to individuals at OCC, FCA, the Federal Reserve Board, and SEC that identified crosscutting goals, as well as either the internal or external organizations with which the individuals would collaborate to achieve these goals. We have reported that more progress is needed to foster the necessary collaboration both within and across organizational boundaries to achieve results. One strategy for fostering collaboration is identifying in individual performance plans specific programmatic crosscutting goals that would require collaboration to achieve. Another strategy for fostering collaboration is identifying the relevant internal or external organizations with which individuals would collaborate to reinforce a focus across organizational boundaries in individuals’ performance plans, as the following examples illustrate. At OCC, an employee had an expectation in his individual performance plan to enhance the division’s ability to work cooperatively and effectively together with other operational risk divisions, as well as enhance coordination with federal and state agencies and outside banking groups to promote consistency and to advance OCC viewpoints, while contributing to OCC’s objective for U.S. and international financial supervisory authorities to cooperate on common interests. A senior manager at FCA had a customized expectation in his individual performance plan to work closely with and coordinate Office of Examination initiatives with other offices, notably the Office of General Counsel and Office of Public Affairs, to support the FCA Chairman and Chief Executive Officer’s three strategic goals, which are (1) improving communications and relationships with the Farm Credit System, (2) gaining greater efficiency and effectiveness of the agency, and (3) promoting the Farm Credit System to become the Premier Financier of Agriculture and Rural America. 
An executive at the Federal Reserve Board had an expectation in his individual performance plan to undertake expanded discussions with SEC on information-sharing, cooperation, and coordination with the aim of strengthening consolidated supervision and achieving consistency in the implementation of Basel II. At SEC, a senior officer in the market regulation division had an expectation in his individual performance plan to advance market regulation objectives through cooperative efforts by coordinating with other SEC offices, other U.S. agencies, self-regulatory organizations, international regulators, and the securities industry. All of the financial regulators included performance elements related to collaboration or teamwork within and across organizational boundaries in individual performance plans for at least some of their employees. Performance elements related to collaboration or teamwork in individual performance plans can help reinforce behaviors and actions that support crosscutting goals and provide a consistent message to all employees about how they are expected to achieve results. CFTC, FHFB, NCUA, and the Federal Reserve Board provide examples of how standardized performance elements pertaining to teamwork or collaboration have been applied to employees. CFTC has established a standardized performance element for all employees that emphasizes collaboration or teamwork, called “Professional Behavior,” which requires employees to behave in a professional and cooperative manner when interacting with coworkers or the public and willingly initiate and respond to collaborative efforts with coworkers, among other things. At FHFB, all employees have performance elements or standards related to collaboration or teamwork in the standardized performance plans for their occupational groups. 
For example, the standardized performance plan for executives includes a performance element for “teamwork” that requires executives to collaborate effectively with associates and promote positive and credible relations with associates, among other things. The standardized performance plan for administrative positions also includes a “teamwork” performance element. For the other three occupational groups, collaboration or teamwork is captured in a performance standard. For example, the standardized performance plans for professional positions and managers/supervisors have a performance element that emphasizes collaboration or teamwork, called “Professionalism,” which requires the employee to develop and maintain effective working relationships with all employees at all levels throughout the agency and external to the agency and foster effective internal and external communication, among other things. NCUA has performance elements related to collaboration or teamwork in the standardized individual performance plans for some occupational groups, such as examiners. For example, in the standardized performance plan for some examiners, there is a performance element for “customer service and teamwork” that requires the individual to demonstrate initiative, responsibility, and accountability to both internal and external customers and work in collaboration with coworkers and others toward common goals. NCUA officials stated that a collaboration/teamwork performance element may not be applicable to all positions. They also said that, to the extent that this is an appropriate performance element on which an employee should be rated, the agency has or will include it in that employee’s performance plan. According to Federal Reserve Board officials, the performance plans for some occupations at the agency, such as security and administrative positions, include teamwork as a standard element. 
Officials also said that customized performance plans for other occupations typically include teamwork or collaboration as a competency. All 10 of the financial regulatory agencies have used competencies, which define the skills and supporting behaviors that individuals are expected to demonstrate to carry out their work effectively. High-performing organizations use competencies to examine individual contributions to organizational results. We have reported that core competencies applied organizationwide can help reinforce behaviors and actions that support the organization’s mission, goals, and values and can provide a consistent message about how employees are expected to achieve results. As previously discussed, while some of the financial regulatory agencies have included customized performance expectations specific to individuals in performance plans, we found that all of the agencies have used competencies. There are some variations in the ways in which the agencies have structured and applied competencies to evaluate employee performance. One of these variations concerns whether or not the agency has assigned different weights to competencies when determining overall summary ratings for individuals. With the exception of the Federal Reserve Board, all of the federal financial regulatory agencies have developed sets of core competencies that apply to groups of employees, and assess employee performance using those competencies as part of the annual performance appraisal process. Using competencies can help strengthen the line of sight between individual performance and organizational success by reinforcing performance expectations that support achievement of the agency’s goals, as the following examples illustrate. FCA has a different standardized performance plan for each of four occupational groups of employees—senior managers, supervisors, examiners/attorneys/analysts/other specialists (non-supervisory), and administrative/technicians. 
Each of the plans includes a standard set of competencies, called critical elements, which applies to all employees in that group. Specifically, the performance plan for employees in the examiners/attorneys/analysts/other specialists group contains the following competencies—technical and analytical skills; organizational and project management skills; teamwork and interpersonal skills; written and oral communication skills; and equal employment opportunity (EEO), diversity and other agency initiatives. A few sentences are included on the performance plan form to describe what each element measures in terms of the employee’s knowledge, skills, and behavior, as shown in figure 6. For the July 2005 to June 2006 performance appraisal cycle we reviewed at CFTC, all employees were assessed on a set of five competencies, called critical elements. Managerial employees were also assessed on three additional competencies having to do with leadership, developing staff, and supporting diversity and EEO programs. FDIC has 27 different performance plans with corresponding sets of competencies, called performance criteria, to cover all employees. According to agency officials, FDIC has learned from experience that having a performance management system that is based on standardized sets of competencies has allowed employees’ performance to be compared more easily to the standards from period to period. In addition, FDIC’s system bases merit pay increases for individuals at least partly on corporate contributions (defined as contributions to corporate, division, or unit-level goals). Officials said that this type of system really enhances employee line of sight and has helped employees focus on how their contributions align with the achievement of organizational goals. In their view, this type of system promotes alignment and consistency more effectively than a system of individual contracts between supervisors and their employees. 
NCUA has approximately 240 detailed performance plans that are tailored to specific occupations and grade levels of employees and that include competencies, which are called elements. All of the employees to whom a particular performance plan applies are assessed on the same set of elements and performance standards. Elements for some employees within the same occupation are universal, but standards can differ by grade level. For example, the performance plans for examiners in grades 7, 11, and 12 all include basically the same elements, but some of the performance standards upon which individuals are to be appraised for each element vary by grade level. The Federal Reserve Board differs from the other financial regulatory agencies in the way it uses competencies. The agency does not have sets of core competencies that apply to specified groups of employees across the agency. Instead, divisions have latitude to vary the design and implementation of the performance plan form and process. According to agency officials, divisions select competencies that best suit occupational types and the divisions’ goals, because the Board has multiple responsibilities dealing with monetary policy and financial institution regulation. It is possible for employees in the same occupational group, but in different divisions, to be rated against different sets of competencies. Agency officials said that they have not heard complaints from similar occupational groups that they may be assessed against different competencies. Further, all officers, managers, and supervisors are rated against the same four management objectives of communications, staff development, effective planning and administration of financial resources, and equal employment opportunity. A few of the agencies, such as OFHEO, FCA, and NCUA, allow differing weights to be assigned to specific competencies when determining overall summary performance ratings for individuals. 
Using weights enables the organization to place more emphasis on selected competencies that are deemed to be more important in assessing the overall performance of individuals in particular positions. Other agencies, including OCC, OTS, FDIC, CFTC, and FHFB, do not assign differing weights to competencies, as the following examples illustrate. At OFHEO, the rating official for each employee assigns a weight to each of the competencies (called performance elements) included in the individual’s performance plan, in consultation with the reviewing official. Each competency must have a minimum weight of at least 5, with the total weight of all the competencies in an individual performance plan equaling 100. Any competency with a weight of 20 or higher is considered to be critical. Each competency element is weighted and scored (see figure 7), and then the weighted ratings for the competencies are summed to derive the total summary rating for the individual. FCA also permits supervisors to assign different weights to competencies for individual employees, within the standardized performance plans, at the beginning of the appraisal period. No competency can be weighted less than 5 percent or more than 40 percent. At NCUA, the elements for the various occupations and grade levels have different weights assigned to them, depending on the priorities and skills pertaining to the positions. The weights are specified on the performance plan form for each position. Some of the financial regulatory agencies, including OCC, OTS, FDIC, CFTC, and FHFB, do not assign different weights to competencies when appraising employee performance. Instead, all of the competencies in an employee’s performance plan are equally considered during the appraisal. 
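The weighted-rating arithmetic described above reduces to a simple weighted sum, and equal weighting is just the special case where every element carries the same weight. The sketch below is purely illustrative: the element names, weights, and ratings are hypothetical, not data from any agency's actual performance plans. It applies the OFHEO-style rules as stated, where each weight must be at least 5, the weights must total 100, an element weighted 20 or higher is considered critical, and the weighted ratings are summed to derive the overall summary rating.

```python
# Illustrative sketch of weighted summary-rating arithmetic.
# Element names, weights, and ratings are hypothetical examples,
# not actual agency data.

def summary_rating(elements):
    """elements: list of (name, weight, rating) tuples.

    Enforces the stated rules: each weight at least 5, weights
    totaling 100; flags elements weighted 20 or higher as critical;
    sums the weighted ratings to derive the overall summary rating.
    """
    if any(weight < 5 for _, weight, _ in elements):
        raise ValueError("each element must carry a weight of at least 5")
    if sum(weight for _, weight, _ in elements) != 100:
        raise ValueError("element weights must total 100")
    critical = [name for name, weight, _ in elements if weight >= 20]
    total = sum(weight * rating for _, weight, rating in elements) / 100
    return total, critical

# Differentially weighted plan: more emphasis on selected elements.
weighted_plan = [
    ("Technical skills", 40, 4.0),  # weight >= 20, so critical
    ("Communication",    25, 3.0),  # critical
    ("Teamwork",         20, 5.0),  # critical
    ("Administrative",   15, 4.0),  # not critical
]
total, critical = summary_rating(weighted_plan)
print(total)     # (40*4.0 + 25*3.0 + 20*5.0 + 15*4.0) / 100 = 3.95
print(critical)  # the three elements weighted 20 or more

# Equal weighting is the special case where all weights match.
equal_plan = [(name, 25, rating) for name, _, rating in weighted_plan]
print(summary_rating(equal_plan)[0])  # (4.0 + 3.0 + 5.0 + 4.0) / 4 = 4.0
```

Under equal weighting, every element contributes identically to the summary rating, which matches the approach of the agencies that treat all competencies as equally considered.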
For example, at OCC, all of the competencies (which are called skill-based performance elements) that are contained in an individual’s performance plan are considered to be critical, so they receive equal weight when determining the overall summary rating for that individual, according to an official. The financial regulatory agencies have used several strategies to involve employees in their systems, including (1) soliciting or considering input from employees on developing or refining their performance management systems, (2) offering employees opportunities to participate in the performance planning and appraisal process, and (3) ensuring that employees were adequately trained on the performance management system when rolling out the system and when changes were made to the system. Overall, the 10 agencies have employed these strategies differently. Effective performance management systems depend on individuals’, their supervisors’, and management’s common understanding, support, and use of these systems to reinforce the connection between performance management and organizational results. Employee involvement improves the quality of the system by providing a front-line perspective and helping to create organizationwide understanding and ownership. All of the financial regulatory agencies, in some way, solicited or considered employee input for developing or refining their performance management systems by working with unions or employee groups to gather employee opinions or conducting employee surveys or focus groups. An important step to ensure the success of a new performance management system is to consult a wide range of stakeholders and to do so early in the process. 
High-performing organizations have found that actively involving employees and stakeholders, such as unions or other employee groups that represent employee views, when developing results-oriented performance management systems helps to improve employees’ confidence and belief in the fairness of the system and increase their understanding and ownership of organizational goals and objectives. Feedback obtained from these sources is also important when creating or refining competencies and performance standards used in performance plans. However, in order for employees to gain ownership of the system, employee input must receive adequate acknowledgement and consideration from management.

Agencies Have Involved Employee Groups in the Performance Management System Process

Unions and employee groups had some role in providing comments or input into the performance management systems at some of the financial regulators. Six of the regulators (CFTC, FDIC, NCUA, OCC, OTS, and SEC) had active union chapters, and four agencies (FCA, Federal Reserve Board, FHFB, and OFHEO) had employee groups. We have previously reported that obtaining union cooperation and support through effective labor-management relations can help achieve consensus on planned changes to a system, avoid misunderstandings, and more expeditiously resolve problems that occur. The degree to which unions and employee groups were involved in providing comments or input into the development or implementation of performance management systems varied from agency to agency. A few of the agencies with unions have to negotiate over compensation. Unions at some agencies were involved in participating in negotiations, entering into formal agreements such as contracts and memoranda of understanding, and initiating litigation concerning the development or implementation of performance management systems.
At other regulators, employee groups were invited to comment on aspects of the performance management system, as the following examples illustrate. OFHEO has used ad hoc employee working groups to study different human capital issues and advise management on recommendations for changes. Specifically, OFHEO established a working group to look at teamwork and communication in the agency and the group recommended changes to the individual performance plans relevant to teamwork and communications. As a result of the group’s recommendation, OFHEO included additional language for the agency’s performance plans in the performance elements of teamwork and communication. At FDIC, the union participated with management in formal negotiations regarding the establishment of the agency’s performance management and pay for performance systems and how the systems would work. Both parties are bound by the terms of the formal agreements that resulted. At NCUA, union representatives together with management issued a memorandum of understanding in June 2006 detailing how supervisors are supposed to introduce new performance plans for specified examiner positions. The agreement set the timing of the introduction of new performance standards, required training for rating officials, required supervisors to give progress reviews to their employees on achievements to date, and required supervisors and employees to discuss the new standards. SEC will implement a new compensation and benefits system as a result of an October 2006 ruling from the Federal Service Impasses Panel (Panel). The Panel became involved when SEC and union negotiations over a compensation and benefits agreement reached an impasse. SEC management told us that they have formed a labor-management working committee to discuss how to implement the terms of the new Compensation and Benefits Agreement as provided for under the Panel ruling. 
Agencies Have Directly Engaged Employees in Consultations about the Performance Management System

The financial regulatory agencies involved employees in different ways when developing their performance management systems. This process can involve directly engaging individual employees and collecting opinions from all employees through focus groups, surveys, or other forms of feedback to develop a successful performance management system. Further, soliciting employee input is also important when developing or revising competencies or performance elements and related performance standards in a performance management system in order to ensure that the competencies and standards reflect skills and behaviors that are relevant to employee tasks and responsibilities. While all of the financial regulators involved employees to some degree, as the following examples illustrate, NCUA did not consistently solicit input on developing or revising the competencies and standards. In 2003-2004, when the Federal Reserve Board sought to revamp its performance management system, the agency hired an outside consultant to conduct focus groups with the intent of identifying issues raised by employees and making recommendations to address any concerns. Some focus group participants said that the agency’s recommended rating distribution guidelines might prevent some employees from achieving a rating in the highest category. Furthermore, some employees were concerned about possible unfairness in ratings and wanted to see the distribution of the performance ratings for all employees published. As a result of this feedback, management began publishing the agency’s ratings distributions, and added information on the system’s process to the agency’s internal Web site on the performance management system. When developing its first performance-based pay system in 2006, CFTC solicited employee input through a variety of methods.
The agency hired a contractor to conduct focus groups and to survey employees about transitioning to a performance-based pay system and the administration of a performance management system. The contractor also hosted a Webinar, a Web-based interactive seminar that allows for the submission of anonymous questions and comments, to present the results of the employee survey. Additionally, CFTC conducted town hall meetings to inform employees about development of the system. As a result of employee feedback, management decided to delay the first phase of implementation of the system from July 2006 until October 2006 in order to allow additional time for employees to learn about the system and make the transition. Union representatives at CFTC (Chicago and New York) told us that prior to CFTC’s transition to performance-based pay, the agency’s management communicated frequently with the union and provided appropriate notice prior to implementing changes. Through internal surveys, OFHEO received feedback on employee concerns regarding opportunities for promotion and the frequency of progress reviews. According to an agency official, feedback from an employee survey indicated that employees wanted more opportunities for promotion than the prior six pay-band system allowed. On the basis of this employee feedback, OFHEO made the decision to switch to 18 pay grades and created career ladders. Further, employees commented through the survey that they wanted more feedback on their performance during the year. As a result, OFHEO increased the number of progress review meetings from two to four per year. An agency official stated that the Office of Human Resources Management monitors these meetings to ensure that they have been held. SEC has analyzed data on SEC responses to OPM’s governmentwide Federal Human Capital Survey. 
According to agency officials, SEC has tracked employee responses to questions on, for example, how well the agency rewards good performers and deals with poor performers. In addition, SEC has created a mailbox for anonymous employee comments and constructive criticism on the performance management system. FCA circulated a draft of its proposed performance management system in 2002 and solicited comments from employees. As a result of employee comments, FCA revised the descriptions of performance elements in the performance plans, changed the weight of an element dealing with equal employment opportunity, eliminated one element, and provided additional guidance and training. To show how employee feedback was addressed, FCA management presented a briefing to employees, which listed some of the employee comments about the individual performance plans with accompanying responses from management. According to an NCUA official present at the time when the agency originally developed its performance elements and standards, NCUA conducted job analysis studies for all positions, which involved employees and supervisors in identifying specific duties, skills, and competencies needed to accomplish different jobs. In addition to the studies, she said that NCUA surveyed employees and conducted an assessment to identify any gaps in the performance elements and standards. In 2006, when NCUA revised the elements and standards for some examiner positions, NCUA used a committee consisting of managers, supervisors, and one employee to develop the new elements and standards. Union representatives told us they were briefed on the final version of the elements and standards but were not asked for input. NCUA is currently revising individual performance plans for other positions, and the process does not include provisions for soliciting and incorporating employee input.
In comments on the draft of this report, NCUA officials stated that NCUA sought to solicit input from employees for certain positions, but that doing so was not necessary for positions that are common across the government, since NCUA usually adopts the competencies established by OPM for those positions. Some union and employee group representatives we spoke with did not think that management gave adequate consideration to employee input. For example, the Employees' Committee at the Federal Reserve Board, which provides advice to the Management Division on a variety of issues, was asked to provide comments during the latest revision of the performance management system. According to committee members, the committee submitted a paper containing recommendations in response to this management request. The committee, however, did not receive a written response from management acknowledging the recommended changes. Committee members told us they are now hesitant to submit input during the current strategic planning process because they question whether it is worth putting time and energy into developing recommendations that may not be considered. According to agency officials, the consultant hired for the project summarized the responses from the Employees' Committee and other employee focus groups held on this topic and presented the summary comments to management through the executive oversight committee. In addition, management officials stated that they met with other committee members (i.e., the heads of special interest groups) to discuss their input. The Federal Reserve Board's Administrative Governor has also held monthly meetings with randomly selected employees as an opportunity for employees to voice their concerns about the performance management program, among other topics.
All of the agencies, including FCA and FHFB, required or encouraged employee participation in developing individual performance plans or writing self-assessments, contribution statements, or reports summarizing accomplishments at the end of the appraisal cycle. In high-performing organizations, employees and supervisors share the responsibility for individual performance management, and both should be actively involved in identifying how individuals can contribute to organizational results and be held accountable for their contributions. By actively participating, employees are not just recipients of performance expectations and ratings but rather have their ideas heard and considered in work planning and assessment decisions. However, employee representatives from some agencies, such as FDIC, OTS, and OCC, expressed concern that employees were not actively involved in the performance planning and appraisal processes even when the agency required or encouraged such participation. At FCA, employees could participate in performance planning by working with their rating officials to identify accomplishments expected to be achieved during the appraisal period. In addition to participating in an official mid-year performance review, at the end of the appraisal cycle employees and supervisors could meet for a pre-appraisal interview to discuss the employees' accomplishments during the previous year. Additionally, employees could submit an optional self-assessment of their performance. This input was supposed to be considered when the supervisor evaluated the employee, according to FCA policy. Employees at FHFB had several options for participating in developing their performance plans: working with the supervisor to develop the plan, providing the supervisor with a draft plan, or commenting on a plan prepared by the supervisor.
Although FDIC, OTS, and OCC provided some opportunities for employee participation in the planning and appraisal processes, we heard from union representatives at these agencies that this participation did not always occur, as the following examples illustrate. FDIC's performance management directive requires that the employee and the supervisor meet to discuss all performance criteria included in the employee's performance plan and any expectations regarding the quality, quantity, or timeliness of work assignments. The policy also encourages the employee to submit an accomplishment report and to submit written comments on his or her supervisor's draft assessment of the employee's “Total Performance” before it is forwarded to higher levels of review within a pay pool. However, union representatives told us that expectation-setting meetings have not been consistently conducted; instead, sometimes employees have simply signed a form to acknowledge receipt of their performance plans. Additionally, employee comments on the appraisal form have not been taken into account by supervisors, according to union representatives. FDIC officials stated that the rating official and employee are required to meet to discuss expectations at the beginning of the rating period or whenever there is a change in performance criteria. Officials also noted that the performance management program is a collaborative process that relies on communication between a manager and his or her employees, and that the employee is supposed to seek clarification on performance criteria or expectations from the supervisor if necessary, as is explained in the directive. An employee union representative at OTS maintained that employees have not been very involved in setting their own performance expectations; instead, supervisors have informed them about what they should do at the beginning of the performance appraisal cycle.
The representative told us that supervisors may discuss changing expectations with employees during the year, but these discussions have not always occurred. According to an agency official, OTS has encouraged managers to meet regularly with their employees and provide a clear picture of what is expected of them for the year, in terms of their individual roles and responsibilities under the standardized performance expectations, and what will be considered in appraising their performance. Although OCC provided opportunities for employee participation in the performance planning and appraisal processes, union representatives told us that this participation did not always occur. At OCC, employees may participate in developing their individual performance plans and are supposed to submit accomplishment reports. Further, officials explained that many employees at OCC have secondary objectives in their performance plans, and because secondary objectives are customized, the employee and supervisor are supposed to discuss them. However, representatives from the union at OCC told us that performance plans are fairly generic and are distributed to individuals based on their grade levels. They said that some employees do not sit with their managers to tailor the plans; instead, employees just sign the forms to acknowledge receipt of the plans. All of the financial regulatory agencies have conducted some form of training or information dissemination on topics related to performance management. Asking employees to provide feedback should not be a one-time effort but an ongoing process, supported by training employees at all levels of the organization to ensure a common understanding of the evaluation, implementation, and results of the systems.
Providing training when changes are made to a performance management system can help ensure that employees stay connected to the system and reinforce the importance of connecting individual performance expectations to organizational goals. At some agencies, such as SEC and FHFB, training has been mainly directed at supervisors, while at FDIC training has been given to nonmanagers as well. Formal training for nonsupervisors at the agencies has typically been directed at new employees or has occurred when significant changes were being made to a performance management system. Some agencies have distributed materials through the agency intranet, memos, emails, or other written documents, as the following examples illustrate. SEC has offered several opportunities for supervisors to learn the mechanics and skills necessary for administering the performance management system. Specifically, new supervisors have received general training on supervisory roles and responsibilities, including performance management. For supervisors, SEC has offered two levels of classes on managing performance and communicating expectations. Supervisors have also had the opportunity to receive training on managing labor relations, which has included discussions of SEC’s agreement with the union, and the performance-based pay and award systems. Supervisors could also attend a briefing on performance management concepts and processes. In addition to offering supervisor training, SEC informs new employees about the performance management system during the orientation program. Performance management information is also available to employees through the agency’s intranet web site. Finally, supervisors are supposed to brief new employees on the performance management system at the beginning of the rating cycle, during discussions of individual performance standards. 
Most employees at FHFB have not received training on performance management since the late 1990s and are expected to learn about the system from their supervisors. However, FHFB offered training for managers and supervisors in 2004 on the performance management system and how to conduct performance appraisals. FDIC has conducted several training sessions and disseminated information to managers and employees related to its performance management and pay for performance programs. This has included in-person training sessions, taped sessions made available for viewing on IPTV, and “question and answer” documents and policy directives available on the agency intranet. FDIC provided specific training for nonsupervisors in 2006, when management and union representatives jointly conducted training sessions on the agency's new compensation agreement. Training was intended for nonmanagement employees, including bargaining unit and nonbargaining unit employees, and was conducted in a variety of formats. Sessions included discussions of employees' roles and responsibilities in the performance management and pay for performance systems. As discussed, the 10 financial regulatory agencies linked pay to performance and built safeguards into their performance management systems but could make improvements to ensure that poor performers do not receive pay increases and to improve the communication of performance standards and transparency of performance results. This section provides more detailed information on the different ways in which the agencies translated performance ratings into pay increases and used different budgeting strategies for performance-based pay. The section also discusses how the agencies awarded pay increases that considered performance but were not dependent on ratings.
Finally, information is presented on agency implementation of two additional safeguards: higher-level reviews of performance rating decisions and appeals processes for performance rating decisions. For increases that were linked to performance ratings, the financial regulatory agencies used different methods to translate employee performance ratings into pay increases. These methods included establishing ranges for increases, using formulas, and considering current salaries when making decisions on the amounts of performance-based pay increases for individuals. Several agencies established ranges of potential pay increases corresponding to the various performance rating levels. These systems gave managers the discretion to determine the exact pay increase amounts for individuals within those ranges, as the following examples illustrate. At OTS, during the appraisal cycle we reviewed, employees who received a rating of 5 (on a 5-level scale) received between a 5.5 percent and a 7.5 percent pay increase, while employees who received a rating of 3 received between a 1.5 percent and a 3.25 percent pay increase. Employees who received a rating of 1 or 2 did not receive any pay increase. OTS gave managers the flexibility to determine the specific pay amount each employee would receive within the range of possible pay increases corresponding to that performance rating. OCC established ranges of potential pay increases that corresponded to different performance rating levels and gave managers the flexibility to decide on the exact amount of pay increase that each individual would receive within the range that corresponded to that employee's rating level. Each year OCC adopts a merit pay matrix that defines a range of allowable percentage increases that may be paid for performance rating levels 3 and 4 (the two highest rating categories).
During the appraisal cycle we reviewed, individuals with a level 3 performance rating were eligible to receive a merit increase between 2.1 percent and 5.5 percent, and individuals with a level 4 rating could receive a merit increase between 5 percent and 9 percent. The rating official recommended the percentage of merit pay that each employee with a summary rating of 3 or 4 should receive. Agency officials told us that it can be challenging for managers to determine the pay increase amount for each employee within those preestablished pay increase ranges. Managers want to ensure consistency among employees with similar levels of performance and often consult with other managers or human resources staff for advice when making these pay increase decisions. Employee representatives expressed some concern about the overlapping ranges for pay increases, and a representative said that employees are unclear about what performance behaviors are needed to achieve merit increases. Other agencies used formulas for determining the amounts of pay increases linked to performance ratings to be awarded, as the following example illustrates. NCUA used a pay matrix tied to employees’ performance rating scores (which could range from 0 to 300) to calculate the pay increase percentages. All employees in the same pay pool that received the same performance rating would receive the same pay increase percentage. Specifically, an employee who received a performance rating score of 234 fell within the “fully successful” performance rating range and received a pay increase of 3.066 percent. Another employee who received a performance rating score of 235 fell within the “highly successful” performance rating range and received a pay increase of 3.076 percent. Employees who received a performance rating score below 165 fell within the “unsatisfactory” or “minimally successful” performance rating ranges and did not receive any pay increases. 
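The two approaches just described, manager discretion within rating-based ranges at OCC and a deterministic score-to-percentage matrix at NCUA, can be contrasted in a brief sketch. This is an illustration, not agency code: the function and variable names are ours. The OCC ranges and the NCUA data points (no increase below a score of 165; 234 yields 3.066 percent; 235 yields 3.076 percent) come from the appraisal cycles described above, but the actual NCUA matrix may define additional bands (for example, for outstanding performance) or vary the percentage by pay pool.

```python
# OCC-style ranges: a rating maps to an allowable range of percentage
# increases, and the manager chooses the exact amount within it.
# Ranges are those reported for the OCC cycle reviewed; levels 1 and 2
# received no increase.
OCC_MERIT_RANGES = {3: (2.1, 5.5), 4: (5.0, 9.0)}

def occ_increase_is_allowed(rating: int, proposed_pct: float) -> bool:
    """Check that a manager's proposed merit increase is within the band."""
    low, high = OCC_MERIT_RANGES.get(rating, (0.0, 0.0))
    return low <= proposed_pct <= high

# NCUA-style matrix: a 0-300 rating score deterministically yields a
# percentage. Thresholds 165 and 235 are stated in the report; any further
# structure of the real matrix is not.
NCUA_BANDS = [(0, 0.0), (165, 3.066), (235, 3.076)]

def ncua_increase_pct(score: int) -> float:
    """Return the pay increase percentage for a performance rating score."""
    pct = 0.0
    for threshold, band_pct in NCUA_BANDS:
        if score >= threshold:
            pct = band_pct
    return pct
```

Note the overlap in the OCC ranges: a level 3 employee granted 5.5 percent out-earns a level 4 employee granted 5.0 percent, which is consistent with the employee concern about overlapping ranges described above.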
Some agencies considered employees' current salaries when deciding on the amounts for pay increases linked to performance ratings, as the following example illustrates. At FCA, the percentage pay increase an employee received depended on where the employee's current salary fell within the pay band. FCA used a merit matrix to calculate merit pay increases. The matrix considered an employee's existing salary position within the relevant pay band (with position defined in terms of one of five possible quintiles), as well as the employee's performance rating, and determined the percentage pay increase corresponding to those factors. For example, for the performance appraisal cycle we reviewed at FCA, the percentage increase in pay that an employee who received a fully successful performance rating could receive ranged from 3.5 percent (for an individual whose salary was in the bottom quintile of the pay band) to 2.0 percent (for an individual whose salary was in the top quintile of the pay band). For employees with the same performance rating, an employee whose salary was considered to be below market rate at the bottom of the pay band would receive a larger percentage pay increase than an employee whose salary was considered to be at or above market rate. FCA provided pay increases only to employees who performed above a minimally successful rating level. At many of the agencies, as an employee's salary approached the top of the pay range for a position, increases linked to performance ratings could be received as a combination of a permanent salary increase and a one-time, lump sum cash payment, as the following example illustrates. At FHFB, for an employee in a position with a pay range of $70,000–$90,000, if the individual's salary was near the top of the pay range, he or she would receive a performance-based merit increase to take his or her salary to the top of the salary range and then receive a lump sum payment.
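The FHFB treatment of employees near the top of a pay range can be expressed as a small calculation that splits a merit increase into a base-salary portion, capped at the range maximum, and a lump-sum remainder. This is a sketch under stated assumptions: the function name is ours, and the 5 percent increase in the example is hypothetical; only the $70,000-$90,000 pay range comes from the example above.

```python
def split_merit_increase(salary: float, increase_pct: float,
                         range_max: float) -> tuple[float, float]:
    """Apply a merit increase, capping base pay at the top of the pay range;
    any remainder is paid as a one-time lump sum."""
    raise_amount = salary * increase_pct / 100
    base_portion = min(raise_amount, max(range_max - salary, 0.0))
    lump_sum = raise_amount - base_portion
    return salary + base_portion, lump_sum

# A hypothetical 5 percent increase for an $88,000 salary in the
# $70,000-$90,000 range: base pay rises to the $90,000 cap and the
# remaining $2,400 is paid as a lump sum.
new_salary, lump_sum = split_merit_increase(88_000, 5.0, 90_000)
```

An employee far from the cap simply receives the whole increase as base pay, with a lump sum of zero.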
Across the various methods used to translate performance ratings into pay increases, the expectation would be that larger pay increases are associated with higher performance ratings. As a means of providing a quantified descriptor of how strongly increases in ratings were associated with increases in pay linked to those ratings at each of the agencies, we computed a Spearman rank correlation coefficient between employees' performance ratings and the percentage increases in pay that were linked to performance ratings. Although the correlation coefficients for the eight agencies varied from +0.63 to +0.94, they all demonstrated a strong positive association between higher performance ratings and higher ratings-linked pay increases (expressed as a percentage increase in salary). While the correlation coefficients provide some additional perspective on the linkage between performance ratings and pay increases at the financial regulatory agencies, they should be viewed as a rough gauge of the overall strength of the relationship across the agencies and are not sufficient for ranking or making other comparisons between agencies. In reviewing the coefficients, we noted that agencies with some of the lowest correlations were using a four-level rating system that produced rather constrained ratings distributions. In one instance, for example, employees rated at the two lowest performance levels (called levels 1 and 2) were not eligible for pay increases, and over two-thirds of all employees received a level 3 rating. Both the base pay increases and bonus amounts that could be awarded for level 3 performance overlapped with those for level 4 (the highest level), such that some employees rated at level 3 realized a percentage increase in pay that was twice the amount obtained by other level-3-rated employees, and even by some level-4-rated employees.
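The Spearman rank correlation referenced above measures how consistently higher ratings pair with larger percentage increases, without assuming a linear relationship. The following stdlib-only sketch (function names are ours, and the five-employee data set is hypothetical, not agency data) computes it as the Pearson correlation of the rank vectors, using average ranks for tied values:

```python
from statistics import mean

def _average_ranks(values):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings and percentage increases for five employees; a value
# near +1 indicates higher ratings consistently paired with larger increases.
rho = spearman([3, 3, 4, 4, 5], [2.0, 2.5, 5.0, 5.5, 7.0])
```

A coefficient of +1 would mean ratings and increases are perfectly monotonically related; the +0.63 to +0.94 values reported above indicate a strong but imperfect ordering.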
The federal financial regulatory agencies also varied in their strategies to budget for pay increases directly linked to performance ratings. Many of the agencies set aside funds each year for performance-based pay increases. At some agencies, these funds were treated as an agencywide funding pool or pools for performance-based pay increases, as the following examples illustrate. According to agency officials, NCUA established two agencywide merit funding pools for different employee grade-level groups because higher-graded employees usually received higher ratings and, consequently, higher merit pay increases. Officials stated that the establishment of two merit funding pools was more advantageous to lower-graded employees and increased the amount of funds available for their merit pay. SEC established one pool of funds for performance bonuses and quality step increases available for senior officers, and another pool for all other employees. At some agencies the performance-based pay increases budget was divided into separate pay pools by suborganizational unit, and the responsibility for distributing merit pay increases was delegated to management at the subunit level, as the following examples illustrate. For the “Pay for Performance” program at FDIC that covers bargaining unit and nonbargaining unit employees, the agency established pay pools at the division level (and at the regional level for the large Division of Supervision and Consumer Protection) and allocated funds for performance-based pay increases to the pools. Funds were allocated through pay pools to each division and office, with subsequent separations of each division or office into separate populations for bargaining unit and nonbargaining unit employees. (Corporate managers and executives at FDIC are covered by a separate pay-at-risk compensation system.) FHFB provided each office with a pay pool for performance-based annual pay increases.
The merit increase pool amounts were determined based on the approved governmentwide general increase plus 2.5 percent of the total base salaries for all employees in the office. An FHFB official stated that each office was provided with its own pool of funds to avoid comparing individuals with different functions and responsibilities to one another, and this official believed that FHFB had greater control when pay decisions were made at the office level. For example, an office director could decide to rate all of his or her staff at the outstanding level, but then less performance-based pay would be available for each employee in the office. Office directors were responsible for determining the sum of all merit increases and lump sum payments for their offices, while not exceeding their offices' merit increase pool allocations. In addition to providing ratings-based pay increases, the financial regulatory agencies awarded pay increases that considered individual performance in some way without being directly linked to employees' performance ratings. The following are additional examples of these types of pay increases at the agencies to supplement the material presented in the body of the report. The Federal Reserve Board offered a cash awards program, which accounted for about 2.5 percent of the total agency salary budget, to reward employees who sustained exceptional performance or made significant contributions to successful projects, according to officials. According to the Federal Reserve Board's criteria for this awards program, cash awards could be given to employees who initiated, recommended, or accomplished actions that achieved important Federal Reserve Board goals, realized significant cost reductions, or improved the productivity or quality of Board services. These awards could be made in any amount up to a maximum of 10 percent of an employee's base pay within the same performance cycle.
The 10 percent maximum did not apply to variable pay awards, which are given instead of cash awards to economists, attorneys, and Federal Reserve Board officers. For some regulators, these types of pay increases were sizeable. For example, at OCC, approximately 10 percent of employees were awarded a special increase during the completed appraisal cycle we reviewed; the awards represented a 5 percent raise for those individuals. According to OCC policy, special increases are to be awarded to recognize the increased value an employee contributes to his or her job by applying desirable skills over a significant period of time or by assuming higher-level responsibilities within his or her pay band. OCC also provided some pay increases for competitive and noncompetitive promotions during the appraisal cycle we reviewed. Notably, of the eight financial regulators that participated in OPM's 2006 Federal Human Capital Survey, OCC had the largest percentage of employees agreeing that awards in their work units depended on how well employees performed their jobs: 55.7 percent of OCC employees agreed with this view, compared with 39.8 percent of employees governmentwide. Two other agencies, FCA and NCUA, also had slightly over 50 percent of their employees agreeing with this statement. Results from the 2006 OPM Federal Human Capital Survey suggest that the financial regulatory agencies have done relatively better than many agencies governmentwide in linking pay to performance. All eight of the financial regulators that participated in the 2006 survey had percentages of positive employee responses about the same as or better than the governmentwide figure of 21.7 percent on an item asking whether pay raises depended on how well employees performed their jobs at their agencies.
The percentage of employees giving a positive response to this item was at least twice as high as the governmentwide value for a majority of the eight agencies participating in the survey. While the financial regulatory agencies built safeguards into their performance management systems, the agencies established and communicated standards for differentiating among performance rating categories and criteria for performance-based pay decisions to varying degrees. The agencies also built in additional safeguards of establishing higher-level reviews of performance rating decisions by either higher-level officials or oversight groups, and all have established appeals processes for employees to request reconsiderations of performance rating decisions. It is important for agencies to have modern, effective, credible, and, as appropriate, validated performance management systems in place with adequate safeguards to ensure fairness and prevent politicization and abuse. We have reported that a common concern that employees express about any performance-based pay system is whether supervisors have the ability and willingness to assess employees' performance fairly. Using safeguards can help to allay these concerns and build a fair and credible system.

Agencies Implemented Higher-Level Reviews of Performance Rating Decisions

Although they have used different approaches, all of the federal financial regulatory agencies have provided higher-level reviews of individual performance rating decisions to help ensure that performance standards were consistently and equitably applied across the agency. All of the agencies have established at least one level of review of employees' performance ratings to help ensure that performance standards were applied appropriately.
At some agencies, this oversight process has involved a second-line supervisor or higher-level official reviewing the employee’s performance rating to ensure that the rating was appropriate and consistent with any narrative describing the employee’s performance. Some agencies also have offices outside of the employee’s team/office, such as the Human Capital Office, review employee performance ratings to ensure that rating decisions for groups of employees (agencywide, or by division or region) were fair and equitable, as the following examples illustrate. OCC officials indicated that at the end of every appraisal cycle, they have evaluated the results of the performance management and pay system by looking, for example, at the differentiation in ratings and pay decisions and how the pay ranges were used. The human resources officials have discussed these results with managers to show them how their employees’ performance ratings and pay decisions influenced OCC’s overall results. For example, OCC introduced merit bonuses for the first time in the 2005 performance appraisal cycle. Upon reviewing the results of the merit bonus decisions, OCC officials found that the percentage of employees in each organizational unit that received a merit bonus varied widely among the units—ranging from a high of over 80 percent of employees receiving a bonus in one unit to 30 percent in another unit. As a result, according to agency officials, OCC decided to recommend a minimum amount for bonuses and restrict the percentage of staff who can receive a bonus to 50 percent within each organizational unit. Agency officials also indicated that they have identified areas of future training on the system based on the results of reviews and subsequent discussions with managers, in order to improve implementation of the system. 
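The kind of distribution review OCC described, checking how merit bonuses spread across organizational units, can be sketched as a simple tally. This is an illustration with hypothetical function and unit names; the counts are chosen to mirror the reported spread (over 80 percent of employees receiving a bonus in one unit, 30 percent in another), and the 50 percent cap is the one OCC adopted after reviewing the 2005 cycle.

```python
def units_over_bonus_cap(bonus_counts: dict, cap_pct: float = 50.0) -> dict:
    """Flag units whose share of employees receiving a merit bonus exceeds
    the cap. bonus_counts maps unit -> (employees_with_bonus, total)."""
    flagged = {}
    for unit, (with_bonus, total) in bonus_counts.items():
        share = 100.0 * with_bonus / total
        if share > cap_pct:
            flagged[unit] = round(share, 1)
    return flagged

# Hypothetical counts mirroring the reported spread across units: the unit
# with an 82 percent bonus share is flagged, the 30 percent unit is not.
flagged = units_over_bonus_cap({"Unit A": (82, 100), "Unit B": (30, 100)})
```

A review like this surfaces outlier units so that human resources staff can discuss the results with managers, as described above.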
At NCUA, an employee’s performance rating was completed and signed by the rating official, and then a reviewing official (an office or regional director) reviewed the employee’s performance rating to ensure that the rating was supported. Reviewers also look for consistency throughout the rating process; for example, an associate regional director will look across all examiners’ ratings in the region. FHFB provided a supervisory review of performance ratings to help ensure that an employee’s recommended rating was justified as well as consistent with other ratings in the employee’s work group. Once the rating official (usually a first-line supervisor) recommended an initial summary rating, the rating official would forward the rating to a second-line supervisory reviewer (usually the division director or deputy director), called a reviewing official. According to FHFB officials, the reviewing official was usually knowledgeable about the employee’s performance and could discuss the rating narrative and final rating decision with the rating official before the rating was shared with the employee. In addition, the reviewing official checked whether performance rating narratives supported individual performance elements, whether summary ratings were properly calculated and appropriately signed, and whether there was consistency of ratings across the work group. FHFB employee representatives with whom we spoke said they believed the rating review process was effective and that supervisors did not give ratings unless they first reviewed their decisions with management. Employee representatives noted that there is a commitment in the agency to be fair and equitable in assigning ratings. FCA provided multiple levels of reviews of ratings to ensure the appropriateness of rating scores and consistency in applying performance standards across FCA offices.
After the rating official completed an initial rating, a second-line reviewer was assigned to review each employee’s rating against the standards. Before final ratings were issued, FCA’s Office of Management Services provided a check to ensure that offices were appropriately and consistently applying performance standards and to look for any significant outliers. Employee performance assessments rated as outstanding and as less than fully successful would be reviewed to determine whether rating scores matched the narrative discussions. Any potential issues identified would be brought to the attention of the rating official for discussion and resolution. Management officials told us that the Chief Human Capital Officer would meet with division management to discuss whether the rating criteria were appropriately applied and then division managers would determine whether to change any performance ratings. In addition, the Office of Management Services performed a post-rating distribution audit to review final rating distributions to help inform future rating practices.

Establish Appeals Processes for Performance Rating Decisions

As mentioned previously, all of the federal financial regulatory agencies have established appeals processes for employees to request reconsiderations of performance rating decisions to help ensure accuracy and fairness in the process. Providing mechanisms for employees to dispute rating decisions when they believe decisions are unfair can help employees gain more trust in the system, as the following examples illustrate. Employees at CFTC could ask for an appeal of their overall rating through the agency’s reconsideration process. An employee could first appeal his or her rating to the manager who reviewed the rating (called the reviewing official) by defending his or her position orally or in writing. 
This reviewing official then considered the employee’s justification as well as the original rater’s opinion and provided a final decision on the matter. According to CFTC officials, employees sometimes wanted to change the wording in their performance evaluations. OTS has defined a grievance policy for employees who are dissatisfied with their performance ratings. Employees covered by the bargaining unit agreement may file a grievance under the negotiated agreement, while employees not covered by the agreement may file a grievance (within 10 days of receiving their ratings) under the agency’s administrative grievance procedures. OTS’ union representative reported that in the past, management and union representatives had resolved many cases of rating disputes before employees filed formal grievances. The Federal Reserve Board has established an appeals process so that an employee can appeal the fairness of an overall rating decision, the rating on an individual element, or any adverse comments appearing on the performance assessment form. Employee representatives we spoke with said that they believe employees understand the appeals process but thought that more employees could take advantage of this opportunity. An employee may first appeal his or her performance rating to a division director, who in turn will notify the appropriate supervisor who submitted the rating. The division director will then determine whether the rating is appropriate based on a review of documentation provided by both the employee and the supervisor. If the employee is not satisfied with the first-level appeal decision, the employee may make a second-level appeal to the Associate Director of Human Resources and specify areas of disagreement with the performance assessment. 
The Associate Director for the second-level appeal will then determine whether the division has reasonably followed procedures and whether performance assessment guidelines were applied consistently to other employees reporting to the same supervisor. The supporting documentation submitted by the division will be shared with the employee, except in cases where doing so would infringe on the confidentiality of other employees. First- or second-level appeal decisions may result in changes to an overall rating, changes to the rating of an individual element, or changes in the language in the employee’s performance assessment. OFHEO has established a three-level appeals process so that employees can dispute rating decisions with which they disagree. Employees can appeal the overall performance rating or individual performance elements within the rating. For the first-level appeal, the employee can submit a request with supporting documentation to the performance rating official for reconsideration. If an appeal is not resolved at the first level, the employee can request that the second-level supervisor review the performance rating and supporting documentation. Finally, the employee can request a third-level appeal by the third-level supervisor, if necessary. The federal financial regulatory agencies have made an effort to meet the comparability requirements as required by the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) and subsequent legislation. However, we found that factors such as funding constraints, the timing of when each agency was granted compensation flexibility under Title V of the U.S. Code, the needs or preferences of their respective workforces, and each agency’s pay and benefits policies can result in some variation in their pay and benefits. The agencies have also taken steps to explore a common survey that would enable them to more efficiently collect information for pay and benefit comparability purposes. 
To seek to maintain pay and benefits comparability, the majority of the 10 federal financial regulators have hired external compensation consultants to conduct individual formal pay and benefits comparability surveys that have included the other financial regulators. As shown in table 5, 7 of the 10 financial regulators conducted pay and benefits comparability surveys. Of the 7, 5 agencies also have included benefits in their formal surveys. According to agency officials, because some of the 10 agencies perceive the private sector as their main competitor for skilled employees, they have included private-sector entities in their pay and benefits surveys or have obtained additional private-sector data through the Bureau of Labor Statistics and private vendors to complement their surveys. The remaining three regulators (CFTC, OTS, and SEC) have participated in the pay and benefits surveys of other agencies, and officials from these agencies said that they have used the results of these surveys but have not conducted their own. For example, an SEC official told us that his agency often uses FDIC’s data because, like SEC, FDIC has a large number of compliance examiners and must negotiate pay and benefits with the same union as SEC. In 2002 and 2003, CFTC also hired consultants to review existing surveys from FDIC and OCC as well as information gathered from other regulators. The agencies hired external compensation consultants to conduct the surveys because, according to officials from FCA and FDIC, these consultants provide an objective view of their agencies’ pay and benefits. In addition, because the consultants have often worked with other FIRREA agencies, they can provide insights and perspectives based on information from other agencies. For pay comparability surveys, external compensation consultants compare base pay ranges for a given occupation, locality pay percentages, and, to a lesser extent, annual bonus and other cash award policies. 
To compare pay across agencies, consultants send questionnaires on behalf of the sponsoring agency and ask participating agencies to match the jobs based on the job descriptions provided. The job descriptions usually contain information on duties, scope of responsibilities, and educational requirements. External compensation consultants also have used various methods to assess the comparability of benefits. For example, the consultant for FDIC did a side-by-side comparison of benefits offered at other agencies, and also calculated the total cost of benefits per employee. In addition to conducting comparability surveys, agency officials told us that human capital officials at the 10 regulators have formed an interagency Financial Regulatory Agency Group. The members regularly consult with each other on pay and benefits issues, and as they prepare their budgets for the coming year, they meet to exchange information on potential and actual changes to pay and benefits. For example, the group has exchanged information on updates in merit pay ranges, bonuses, salary pay caps, and benefits such as flexible work schedules. Agency officials also have taken turns to update a spreadsheet that lists the pay ranges and benefits for all 10 financial regulators, a key document the agencies use to compare pay and especially benefits informally across agencies. However, in consulting with each other to meet comparability requirements, agency officials told us that because many of the financial regulators conduct comparability surveys, their staffs have had to respond to numerous and often overlapping inquiries, which can be burdensome and inefficient. This is especially the case for smaller agencies, such as FCA and FHFB, which tend to have smaller human capital (personnel) departments than larger agencies that may have pay and benefits specialists who can handle comparability issues full time, including filling out and processing various comparability surveys. 
According to officials from a few regulators, partly as a result of the substantial investment of time and resources, some agencies have not been timely or forthcoming in sharing their pay and benefits information. According to several agency officials, in response to renewed interest among upper management at several agencies in consolidating pay and benefits surveys, the regulators are studying the feasibility of such a consolidation. In December 2006, the regulators formed a subcommittee within the Financial Regulatory Agency Group to study the feasibility of a common survey. Agency officials are exploring whether consolidating the various comparability surveys into a common survey will improve the process for job matching and result in more efficient use of resources. They also told us that the subcommittee has discussed the feasibility of establishing a Web-based data system to make the most current pay and benefits information available to participating agencies. The subcommittee is working on the details of allocating the costs of a common survey among the agencies but has suggested that costs might be prorated based on the size of each regulatory agency. As of March 2007, agency officials had not yet received cost figures from potential consultants. 
Agency officials who attended the first subcommittee meeting told us that implementation of a common survey would require collaboration and agreement on a number of matters, including the choice of external compensation consultant to conduct the common survey, since different consultants take different approaches; the group of jobs to be benchmarked and the best approach for job matching, as some jobs are unique to certain agencies; the timing and frequency of the survey, since agencies determine pay and benefits at different times of the year and need updated information when the need arises; the number and types of organizations to include, because while all agencies would want to include the financial regulators, some may also need information from certain private-sector entities; and the cost of the survey, which may be substantial and which, according to some agencies, is a potential concern. By forming the subcommittee to explore issues associated with developing a common survey, agency officials have adopted some of the practices that we identified as enhancing and sustaining collaborative efforts. These practices include defining and articulating a common outcome, establishing means to operate across agency boundaries, and leveraging resources. Agency officials who are members of the subcommittee told us that they have sent a formal request for information to several consultant candidates. The request inquired about the consultants’ ability to plan and execute a common survey that would provide customizable reports for each agency and also create a secure, centralized data source on pay and benefits. In addition, agency officials asked how the consultants would approach job matching, a complicated task. 
For example, officials from FDIC, OFHEO, and SEC told us that the use of different pay plans and grades among agencies and the location of field offices in cities with different employment market conditions contributed to the difficulty in matching jobs across regulators. In addition, some agency officials said that it is difficult to match jobs because agencies have job requirements that may differ even when a job title is the same. The subcommittee received responses from various consultants and as of March 2007 was in the process of contacting the consultants to gather more details and to discuss the options available to them. In the absence of a legislative definition, agency officials told us that agencies have used various benchmarks, as shown in table 6, to assess pay and benefits comparability. For example, FDIC has sought to set its total pay ranges (base pay plus locality pay) for specific occupations and grade levels within 10 percent of the average of the FIRREA agencies, a benchmark that pay and benefits consultants have used in their comparability surveys. FCA uses benchmarks that include the average market rate paid by other financial regulators, and CFTC uses average payroll and salary structure relative to other regulators. FHFB, NCUA, OTS, and SEC told us that they have not used specific benchmarks, although OTS uses informal benchmarks as needed. Agency officials told us that all agencies, including the three agencies that have not conducted formal benefits surveys, have assessed their benefits comparability by comparing individual benefit items as well as agency contributions to specific benefits. They added that most agencies have used the interagency group spreadsheet that lists all the benefits and agency contributions offered. According to agency officials, the financial regulators have used information from the pay and benefits comparability surveys and discussions among the agencies in their efforts to seek to maintain comparability. 
Table 7 provides some recent examples of these efforts. Although the financial regulators have adjusted their pay and benefits to seek to maintain comparability, several factors influence compensation decisions that lead to some variations in pay ranges and benefit packages. As shown in figure 3 in the report, with the exception of the Federal Reserve Board and OFHEO, the financial regulators’ total pay ranges consist of base pay and locality pay percentages that are calculated based on the employees’ duty station. The Federal Reserve Board and OFHEO do not have separate locality pay percentages because Washington, D.C., is their only duty station. Figure 3 also shows that, for examiners, FDIC and NCUA pay ranges generally have lower minimum base pay than other agencies, and FDIC and OCC have higher maximum base pay for examiners. In addition, for economists, CFTC and FDIC pay ranges have lower minimum base pay than other agencies, and the CFTC and OCC pay ranges have higher maximum base pay. Actual average base pay figures that we obtained from the Central Personnel Data File and from the Federal Reserve Board also vary among the 10 agencies in relation to the agencies’ respective base pay ranges, as shown in figure 3 in the report. For example, the actual average base pay for examiners at OCC ($92,371) is 52 percent of the maximum pay range of $177,600. However, actual average base pay as a percentage of maximum pay can vary considerably, as in the case of SEC attorneys. Their actual average base pay ($124,379) is 98 percent of the maximum pay range of $126,987. According to agency officials, two factors affect where actual average base pay falls within an agency’s pay range. One is the distribution of the length of service among employees. For example, the actual average base pay for agencies with a higher proportion of long-tenured employees would be closer to the maximum of its pay range. 
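The range-position figures cited here are simple ratios of average pay to the range maximum; a minimal sketch (using only the dollar figures quoted in this report) reproduces both cited percentages:

```python
# Position of actual average base pay within a pay range, expressed
# as a percentage of the range maximum (dollar figures from the report).
def pct_of_max(avg_pay: float, max_pay: float) -> int:
    return round(100 * avg_pay / max_pay)

occ_examiners = pct_of_max(92_371, 177_600)   # OCC examiners
sec_attorneys = pct_of_max(124_379, 126_987)  # SEC attorneys
print(occ_examiners, sec_attorneys)  # 52 98
```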
Conversely, actual average base pay for agencies with a higher proportion of new hires would fall closer to the minimum of the pay scale. An OCC official told us that although OCC also has a large number of experienced examiners, the actual average pay for OCC examiners may seem low compared to other agencies because OCC has hired a large number of examiners during the last 2 years. Officials from several federal regulators also told us that they rarely hire at the lower grade levels for some occupations. For example, FHFB tends to hire mid-level employees because its relatively small office cannot afford a long training period for new hires. As shown in table 2 in the report, locality pay percentages vary among agencies for the same duty station. Table 8 shows the methods that agencies are currently using to determine their respective locality pay percentages and adjustments. The benefits that the 10 financial regulators offered also varied. Although all of the agencies offer standard federal government benefits, there are variations in the extent of agency contributions and the types of additional benefits these agencies offer. For example, all financial regulators offer the Federal Employees Health Benefits program, but agency contributions differ. Some agencies pay for a percentage of the health premium (e.g., 70 percent at FCA and 90 percent at OFHEO). CFTC contributes 100 percent for reservists called to active duty. The following are selected examples of the additional benefits that some financial regulators offer as of September 2006, unless noted otherwise: Five of the 10 regulators—FDIC, FHFB, the Federal Reserve Board, OCC, and OTS—offer their employees 401(k) retirement savings plans with varying employer contributions. In addition, all agencies except the Federal Reserve Board offer the federal Thrift Savings Plan. The Federal Reserve Board and OCC offer domestic partner benefits for some types of plans. 
FCA and SEC offer child care subsidies, and FDIC and OTS offer on-site day care. Through their wellness accounts, FCA, FDIC, FHFB, and OCC reimburse employee expenses related to items such as fitness, recreation, and adoption; amounts range from $250 per year at FDIC to $700 per year at FHFB. According to agency officials, a number of factors have influenced their pay and benefits policies and could have contributed to the variations in their pay ranges and benefits. For example, the length of time an agency has been under the comparability requirements and related compensation flexibility provisions affected compensation. CFTC and SEC officials told us that because their agencies received pay and benefits flexibilities and were put under a comparability requirement much later (in 2002) than the six FIRREA agencies (in 1989), CFTC and SEC have taken an incremental approach to slowly increase their pay and benefits to close the gap with the other financial regulators. According to a CFTC official, this approach allows time for employee input and acceptance while building agency capacity to manage the authority. Budgetary constraints represent another factor. OFHEO officials told us that OFHEO did not implement a new 401(k) retirement savings plan recommended by its external compensation consultant, Watson Wyatt, in its 2005 comparability survey because OFHEO is working to control the growth of its personnel expenses and because budget limitations resulting from being part of the appropriations process have caused OFHEO to curtail new benefits programs. Furthermore, agency officials said that an agency has to consider the particular needs and preferences of its workforce as well as ways to attract and retain that workforce. For example, CFTC added a fully paid dental benefit as a result of an online vote by employees on preferred benefits options. 
FDIC officials indicated that its employees greatly value the matching contribution FDIC provides on its 401(k) plan, and found that the matching contribution is also an effective retention tool. Similarly, OCC added a 401(k) retirement savings plan in order to attract and retain employees. According to an SEC official, SEC uses a student loan repayment benefit because the benefit helps to attract and retain employees, many of whom are recent law school graduates. Agency officials emphasized that it was not their goal to have identical pay and benefits packages; rather, they considered pay and benefits as a total package when seeking to maintain pay and benefits comparability and when setting pay and benefits policies aimed at recruiting and retaining employees. See table 9 for more detailed information on the benefits that the 10 financial regulators offer. The following table lists selected benefits identified by 10 financial regulators as of September 2006, unless otherwise noted in the table. We included the following categories of benefits: insurance, pre-tax benefits, child care, leave, travel and relocation, educational and professional expenses, retirement, work/life benefits, and other benefits and payments. We reviewed the movement of financial regulator employees from fiscal year 1990 through 2006 using data from the Central Personnel Data File (CPDF). We found that the movement of employees among the financial regulators was very low and presented no discernible trend, but that 86 percent (13,433 of the 15,627) of employees leaving the regulators voluntarily (i.e., moving or resigning) resigned from the federal government. Our analysis did not include the Federal Reserve Board of Governors because CPDF does not contain data from the Federal Reserve Board. (For more detail on our methodology, see app. I.) 
This appendix includes additional data for fiscal years 1990 through 2006 on the average number of these employees moving among the 9 financial regulators; the movement of employees among 9 of the 10 financial regulatory agencies by occupation; and agency-by-agency snapshots of employment by occupation and employee movement. Figure 8 shows the average number of employees in mission-critical and other occupations moving among the 9 financial regulators for which we have data from fiscal year 1990 through fiscal year 2006. On average, a total of 919 employees per year moved or resigned. Movement among the regulators ranged from an average of less than 1 employee per year for investigators to an average of over 11 for examiners. Table 10 provides the actual number of financial regulator employees for whom we had data, by mission-critical and other occupations, who moved to another financial regulator from fiscal year 1990 through 2006. Tables 11 through 19 provide employment by occupation and movement data for 9 of the 10 agencies from fiscal year 1990 through 2006. In addition to the contacts named above, Belva Martin and Karen Tremba, Assistant Directors; Thomas Beall; Amy Friedlander; Robert Goldenkoff; Eugene Gray; Simin Ho; Anne Inserra; Janice Latimer; Donna Miller; Marc Molino; Jennifer Neer; Barbara Roesmann; Lou Smith; Tonya Walton; Lindsay Welter; Gregory Wilmoth; and Robert Yetvin made major contributions.
Congress granted financial regulators flexibility to establish their own compensation systems and required certain agencies to seek to maintain comparability with each other in pay and benefits to help the agencies overcome impediments to recruiting and retaining employees and avoid competing for the same employees. In response to a request, this report reviews (1) how the performance-based pay systems of 10 financial regulators are aligned with six key practices for effective performance management systems, (2) the actions these agencies have taken to assess and implement comparability in pay and benefits, and (3) the extent to which employees in selected occupations have moved between or left any of the agencies. GAO analyzed agency guidance and policies, agency data on performance ratings and pay increases, agency pay and benefits surveys, and data from the Central Personnel Data File, and interviewed agency officials. The 10 federal financial regulatory agencies have generally implemented key practices for effective performance management but could improve implementation of certain practices as they continue to refine their systems. All of the financial regulators awarded some pay increases during the appraisal cycles we reviewed that were linked to employees' performance ratings, although two also provided across-the-board pay adjustments, even to employees who had not received acceptable performance ratings, weakening the linkage of pay to performance. Both agencies have indicated that, in the future, annual pay adjustments will not be awarded to unsuccessful performers. The agencies have generally aligned individual performance expectations and organizational goals, connected performance expectations to crosscutting goals, used competencies to provide a fuller assessment of performance, and involved employees and stakeholders in the process. All of the agencies built safeguards into their performance management systems to enhance credibility and fairness. 
However, the extent to which the agencies communicated overall results of performance rating and pay increase decisions to all employees varied, and some could increase transparency by letting employees know where they stand relative to their peers in the organization, while protecting individual confidentiality. Financial regulators have hired external compensation consultants to conduct pay and benefits comparability surveys, exchanged pay and benefits information, explored the feasibility of conducting a common survey, and adjusted pay and benefits to seek to maintain comparability with each other. Although financial regulators have adjusted pay and benefits partly based on the results of their comparability efforts, there is some variation in pay ranges and benefit packages among the agencies. According to agency officials, factors such as the year the agencies first became subject to comparability provisions, budget constraints, and the needs and preferences of workforces play a role in compensation decisions and contribute to this variation. Furthermore, agency officials emphasized that it was not their goal to have identical pay and benefits packages; rather, they considered pay and benefits as a total package when seeking to maintain comparability and when setting pay policies aimed at recruiting and retaining employees. Between fiscal years 1990 and 2006, few employees moved among financial regulators, and the movement among these agencies presented no discernible trend. Specifically, 86 percent (13,433) of the 15,627 employees who left during this period (i.e., by moving or resigning, not retiring) resigned from federal employment. Annually, the percentage of employees who moved to another financial regulator ranged from a low of 1 percent in fiscal year 1997 (16 out of the 1,362 who moved or resigned) to a high of 8 percent in fiscal year 1991 (97 out of the 1,229 who moved or resigned). 
The total number of financial regulatory employees was 15,400 and 19,796 during those 2 years, respectively.
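The departure percentages in this summary follow directly from the underlying counts; a minimal sketch using the report's own figures:

```python
# Voluntary-departure percentages computed from the report's counts.
def pct(part: int, whole: int) -> int:
    return round(100 * part / whole)

resigned_share = pct(13_433, 15_627)  # resignations as a share of all moves + resignations
fy1997_moves = pct(16, 1_362)         # low point: moves to another regulator, FY 1997
fy1991_moves = pct(97, 1_229)         # high point: moves to another regulator, FY 1991
print(resigned_share, fy1997_moves, fy1991_moves)  # 86 1 8
```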
If ATATs are highly technical and organized or marketed, they are often referred to as abusive tax shelters. According to IRS, abusive tax shelters result in unlawful tax evasion. Our report on business network-based tax evasion illustrates how one type of evasive transaction—the installment sale bogus optional basis—operated. ATATs also include abusive transactions that are considered scams or schemes based on the erroneous application of tax law or clearly frivolous arguments. Tax shelters can be legitimate to the extent that they take advantage of various provisions in the tax code to lawfully avoid tax. For instance, retirement plans (e.g., 401(k)) shelter income by not subjecting certain wages to federal income taxes until the wages are distributed from the plan. Tax shelters can feature such techniques as taxpayers trying to avoid gains altogether or to convert ordinary income into capital gains to take advantage of lower tax rates on capital gains. A difficulty arises when tax shelters are designed to confer a tax benefit that the Congress did not intend. An example of this type of shelter is the lease-in, lease-out (LILO) shelter that involved complex purported leasing arrangements in which corporations supposedly leased large assets, such as sewer systems, from owners without a tax liability and immediately leased them back to their original owners in an attempt to delay income recognition for tax purposes for many years. ATATs have been a long-standing problem that the Congress, Treasury, and IRS have used different methods to address. For example, the Tax Reform Act of 1986 addressed tax shelters from the 1970s and 1980s by preventing individual taxpayers from using “passive activity” losses from tax shelter investments to reduce taxes by offsetting taxable income. Interest in abusive tax shelters picked up again in the 1990s. In 1999, a Department of the Treasury report described a large and growing problem with abusive corporate tax shelters. 
In 2002, citing many ongoing efforts, Treasury published a plan to further combat ATATs, featuring both legislative proposals and administrative actions. In 2004, the AJCA provided updated disclosure and list-maintenance rules and updated penalty provisions. The list-maintenance rules require that material advisors keep lists of their investors and make the lists available to the Secretary of the Treasury within 20 business days of a request. For a summary of selected provisions of the AJCA related to ATATs, see appendix II. Over time, Treasury’s strategy for addressing tax shelters centered on rules that were intended to reinforce each other. The rules attempted to do this by requiring taxpayers entering into certain transactions and tax advisors recommending the transactions to disclose to IRS information about the same transactions. The idea was that using these rules, IRS could follow a transaction from a taxpayer to the taxpayer’s advisor and from the advisor to any of the advisor’s clients. The rules require specified taxpayers to disclose “reportable transactions.” These transactions include “listed transactions” that are the same or substantially similar to one of the types of transactions that IRS has determined to be a tax avoidance transaction and identified by notice, regulation or other published guidance. Reportable transactions also include “non-listed” transactions, which are not designated as tax avoidance transactions but prompt tax avoidance or evasion concerns nonetheless. Non-listed reportable transactions include certain transactions (1) offered to a taxpayer under conditions of confidentiality and for which the taxpayer paid an advisor a minimum fee and (2) certain loss transactions. Non-reportable abusive transactions are abusive transactions not described in one of the reportable categories. 
For a comparison of requirements for reportable and non-reportable transactions and a description of how taxpayers, material advisors and other promoters, and IRS interact with each other, see figure 1. IRS has had various forms for filers reporting ATAT information. For example, taxpayers are to file Form 8886, Reportable Transaction Disclosure Statement, to disclose their reportable transactions. Form 8918, Material Advisor Disclosure Statement, is to be filed by material advisors. This form was created in 2007 to replace Form 8264, Application for Registration of a Tax Shelter, which was to be filed by tax shelter organizers in order to describe a transaction and its tax benefits when the transaction had certain potentially abusive characteristics. To enforce compliance, IRS has three interlocking efforts: promoter investigations, investor examinations, and settlements. Figure 2 focuses on two IRS operating divisions—LB&I and SB/SE—that develop and evaluate promoter leads for investigation and shows how each division coordinates with others, including the Servicewide Abusive Transaction Executive Steering Committee. To make a case against abusive promoters, LB&I or SB/SE may examine the tax returns of taxpayers investing in the promotions. If they make such a case, the promoters will be unable to sell their ATATs to taxpayers, and IRS will thus have fewer taxpayers to examine to see if their investments in those promotions cause tax concerns. IRS may also settle with groups of taxpayers without necessarily having to first locate and examine each taxpayer who used a promotion. IRS induces these taxpayers to come forward in disputed matters by, in some cases, reducing their penalties in exchange for conceding tax benefits that they claimed. IRS has limited trend data on the size of the ATAT problem in terms of the number of abusive promoters and taxpayers investing in the promotions. 
Estimating the extent of ATATs is at best an inexact process because ATATs are often hidden. Data do not exist to measure any ATATs unknown to IRS with much precision. Given these difficulties, IRS used various qualitative and quantitative methods in an attempt to develop some estimates in a 2006 study. IRS estimated that about 1 million tax returns and between about 11,000 and 15,000 promoters were involved in ATATs in 2004. Of the 1 million returns, IRS estimated that more than half related to “business and deduction” schemes and almost a third involved “frivolous filer/anti-tax” schemes. IRS put the rest of the returns into six other categories, such as corporate tax shelters. IRS had no plans to update these estimates. In the absence of data on trends in the use of ATATs, we interviewed a number of tax experts, including current IRS officials, former top IRS officials, and others well-known in the tax community. The experts we interviewed told us that abusive tax avoidance is still a major issue but that the nature of ATATs has changed. A theme we heard from the experts is that the mass marketing of ATATs has declined in recent years, although the experts had different views on the extent of the decline. Mass marketing refers to the sale of advice by promoters such as larger accounting and tax law firms about how to structure ATATs. This advice was sold to clients such as wealthy individuals and corporations. One expert said that mass marketing of ATATs has significantly declined in recent years. Others said that the battle has been “more won than not.” Although mass marketing of ATATs has declined, these experts said that ATATs have become more sophisticated and international in scope. In addition to international transactions, ATATs are changing as false credits and deductions, customized shelters, and return preparer fraud have come more to the fore.
IRS’s “dirty dozen” list, its annual listing of “notorious tax scams,” ranks certain abuses that are relevant to ATATs—such as return preparer fraud and trying to hide income offshore—at the top. The experts we interviewed gave us details about how ATATs involving international features or tax return preparers changed. For instance, one expert believed abuse took the form of improperly keeping income offshore and not reporting it on a tax return. IRS officials said that abusive transactions moved from being domestic transactions mass marketed by large accounting and law firms to offshore transactions promoted by smaller entities and more customized to the buyers. IRS officials also said that ATATs seemed more international than before, with promoters changing the countries and mechanics of their promotions. In terms of tax return preparers, IRS officials told us of promotions systematically using false or inflated deductions or credits in tax returns. These schemes achieved broad coverage by taking small-scale abusive positions with individual clients. For instance, preparers solicited clients in an attempt to improperly claim the First-Time Homebuyer Credit, which first came into existence in 2008. Experts also told us that the nature of the ATAT problem is cyclical and ever-changing and warrants continuous IRS vigilance. According to IRS officials, IRS tries to proactively identify and thwart emerging ATATs, especially early in their life cycles; these officials pointed to early IRS identification of taxpayers’ attempts to improperly claim the First-Time Homebuyer Credit. Three data sources other than the above IRS estimates also give some indication of changes in ATAT activity. First, taxpayers are disclosing fewer listed reportable transactions (which are designated as tax avoidance transactions) to IRS on Forms 8886. Taxpayers disclosed about 6,100 of these transactions in 2007 and about 1,300 each in 2008 and 2009.
However, IRS cannot know how many listed transactions should have been disclosed but were not. Second, the cumulative number of transaction types that IRS has “listed” since 2000 has leveled off at 36. As figure 3 shows, IRS did not designate any new listed transactions in years 2008 through 2010. IRS officials said they detected fewer widely promoted avoidance transactions suitable for listing in recent years. However, the number of transaction types that are listed is not an indication of how many promoters or taxpayers are using them. Third, IRS identified two transactions of interest (TOI) in both 2007 and 2008 but none in 2009 or 2010. IRS designates a new promotion that has the potential for tax avoidance or evasion as a TOI when IRS lacks information to decide whether to list it as a tax avoidance transaction. Doing so triggers a requirement for the taxpayers involved to disclose information about the transaction to IRS on Form 8886. IRS has investigated promoters in an effort to stop ATATs, examined the tax returns of taxpayers participating in ATATs, and initiated settlements with groups of taxpayers without necessarily having to first locate and examine each taxpayer using the ATAT. IRS efforts in these three areas are consistent with what we heard about how the ever-changing problems with ATATs merit continued vigilance. However, IRS has difficulty quantifying the IRS-wide impact of these efforts on the ATAT problem. As context for discussing such IRS-wide impacts, several examination and settlement initiative projects considered by the Servicewide Abusive Transaction Executive Steering Committee showed the challenges of working across IRS units. An internal IRS report noted that decisions by different divisions on how and when to report results on their work for one IRS-wide settlement initiative initially resulted in inconsistent briefings to the Enforcement Committee, to which the Steering Committee reports. 
Another IRS team working on an ATAT issue informed the Steering Committee about obstacles to coordinating among IRS units and about needed mitigations. As our business networks report indicated, competing examination efforts or plans across divisions made prioritization difficult. For fiscal years 2006 through 2010, about 100 SB/SE promoter investigations annually resulted in injunctions for promoters to stop what they were doing and/or penalties for what they did, as table 1 shows. The 561 investigations over the 5 years resulting in injunctions or penalties were 38 percent of all investigations closed. For the same years, SB/SE closed (e.g., discontinued for various reasons) 905 investigations (62 percent) without penalties or injunctions. This annual level of SB/SE investigations shows IRS’s vigilance in attempting to identify and pursue leads to address ATATs. According to Lead Development Center (LDC) officials and documents, LDC develops leads received from such sources as IRS revenue agents and officers and practicing accountants who suggest an individual may be involved in an abusive promotion. IRS field offices decide whether to pursue an investigation. If an investigation cannot sustain a penalty or injunction, it can be surveyed (closed without promoter contact) or discontinued (closed after promoter contact). LDC officials said that the reasons for surveyed or discontinued investigations include the difficulty in proving abuse, the need to balance limited resources and many priorities in addressing the most egregious promoters, and the lack of harm to the government. IRS had incomplete data on why investigations were discontinued or surveyed. In fiscal year 2009, SB/SE discontinued 84 investigations, surveyed 46, and closed 1 because the promoter died. Of the 130 cases surveyed or discontinued, we could not analyze 30 because LDC officials said they did not receive the documentation and 3 because the documentation was incomplete. 
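The investigation shares quoted above can be checked with a few lines of Python. The counts come straight from the figures cited in the text (as drawn from table 1); this is an arithmetic illustration only, not a depiction of IRS data processing:

```python
# Arithmetic check of the SB/SE promoter-investigation shares cited in the
# text for fiscal years 2006 through 2010.
with_penalty_or_injunction = 561   # investigations closed with an injunction or penalty
closed_without = 905               # investigations closed without either outcome

total_closed = with_penalty_or_injunction + closed_without
share_with = round(100 * with_penalty_or_injunction / total_closed)
share_without = round(100 * closed_without / total_closed)

print(total_closed)    # 1466 investigations closed over the 5 years
print(share_with)      # 38 (percent), matching the reported figure
print(share_without)   # 62 (percent), matching the reported figure
```

The rounded shares reproduce the 38 percent and 62 percent reported in the text.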
In over 65 percent of the other 97 cases we could analyze, the investigations closed because the parties were not actively promoting abusive transactions or because IRS could not obtain enough evidence to support a penalty or injunction. In February 2011, LDC started using codes to capture the reasons for surveying or discontinuing promoter investigations to have more complete data on these reasons. SB/SE officials told us they plan to promote consistency in the use of the reason codes by asking field offices to describe why they selected the codes for each case and by continually analyzing the different codes used. Because this process had just started, we had no assurance how these plans would work or how the reason-code data would be used to make decisions on the types of investigations to start. IRS had no criteria to indicate whether the SB/SE investigation results in table 1 were at desired levels. While the effectiveness of injunctions is apparent when they stop promotions, tax experts questioned the effectiveness of penalties if they do not deter those who will risk a penalty to engage in an abusive promotion. Without criteria, IRS could not say whether having 62 percent of investigations closed without penalties or injunctions was too many, too few, or about right, which would be important information in deciding which types of cases to select for investigation. Regardless, IRS officials said that closing 62 percent of investigations without penalty or injunction did not indicate any flaws. They said that decisions about doing an investigation usually cannot be made without some field work; decisions about continuing an investigation with additional field work must be balanced with the available field resources. These officials said they continually look for ways to develop and refine leads before turning them over to investigators because successful investigations of promoters drive unscrupulous individuals out of business. 
The impact of examinations on the ATAT problem is uncertain. Examinations of the returns from taxpayers involved in suspected ATATs recommended billions of dollars in additional tax assessments from fiscal years 2006 through 2010. IRS did not track how much of these recommended amounts came from the ATATs versus other tax issues. Further, the recommended amounts may not produce actual tax assessments or collections when taxpayers dispute the recommended amounts in the appellate or litigation processes. For examinations closed in fiscal years 2006 through 2010:

- LB&I examinations of 9,400 tax returns with suspected ATATs recommended additional assessments for all tax issues of $42.4 billion, of which taxpayers disagreed with about 84 percent.
- SB/SE examinations of 125,700 returns with suspected ATATs recommended additional assessments for all issues of $6 billion, of which taxpayers contested at least 54 percent in IRS’s Appeals office.

Neither LB&I nor SB/SE readily tracked how much of the additional taxes were ultimately assessed and collected after examinations for either ATAT or all tax issues. IRS officials told us that they started tracking amounts collected from examinations that included ATAT issues on a monthly basis during fiscal year 2011, but the tracking does not isolate the amounts coming from the ATAT (as opposed to other) issues. For examinations that included taxpayer disclosures of reportable transactions filed with IRS’s Office of Tax Shelter Analysis (OTSA), OTSA did not have a comprehensive view of the results of examinations done by LB&I and SB/SE. After OTSA sends the disclosures to LB&I and SB/SE for possible examination, OTSA relies on the two IRS divisions to report back on the results. However, each division reported results differently. For LB&I examinations, the examiners were the source on the examination results. OTSA officials said the examiners did not report back on all results in a consistent manner because they were not required to do so.
For SB/SE examinations of the disclosures, SB/SE officials said they collected the results for OTSA from data systems and not from examiners. In general, SB/SE and LB&I officials said that divisions track information differently because of different needs. For example, SB/SE relies more on electronically capturing examination results because it does more examinations of shorter duration compared to LB&I. According to SB/SE officials, SB/SE was unable to provide OTSA with data in time for the May 2010 annual report to the Joint Committee on Taxation due to its larger number of examinations. These SB/SE officials also said that they did not review the accuracy of the SB/SE data used in the annual reports. Without comprehensive or consistent results on examinations of ATAT disclosures for the report to the Joint Committee, IRS cannot be certain it is providing reliable information to the Congress. Nor will IRS executives have the best information available for making decisions about the number of examinations to do and for evaluating their impacts. In various ATAT settlement initiatives, IRS provided inducements for taxpayers to come forward to IRS to resolve disputed matters. The inducements sometimes took the form of reducing taxpayers’ penalties in exchange for taxpayers conceding tax benefits that they claimed. IRS reported to the Joint Committee on Taxation that it had collected billions of dollars from taxes, penalties, and interest from the beginning of its 17 ATAT settlement initiatives through early 2010. These dollar figures should not be considered the final word in describing the 17 initiatives’ results through early 2010. On one hand, they did not include the results of field work and litigation still occurring at the time of that report. On the other hand, initiative results included some collections from taxpayers who did not participate in the settlement but whose tax returns had been examined because they were related to the relevant transaction. 
Also, according to IRS officials, initiative results sometimes included issues not targeted by the initiative. As noted earlier, IRS’s enforcement tracking systems only track ATAT results by case and not by separate tax issues within a case. Furthermore, the dollar amounts collected for the 17 ATAT initiatives were not reported consistently to the Joint Committee on Taxation. For instance, the dollar amount for the Global Settlement Initiative, which aimed to resolve 21 unrelated abusive transaction issues under one framework, was the amount of additional tax recommended, not the amount collected. Also, the total for all of the ATAT initiatives as reported to the Joint Committee did not include any dollars collected from the very large-dollar Lease-In/Lease-Out (LILO) and similar Sale-In/Lease-Out (SILO) ATAT initiatives. The responsible IRS group did not provide data on the LILO initiative and provided data on adjustments to taxable income, rather than the amount collected, on the SILO initiative. Lacking data on how much additional tax ultimately was collected limits information on the impact of these settlement initiatives. Better tracking of dollar collections could be considered for future initiatives. However, IRS has not seen the need to start new ATAT initiatives, which could be consistent with experts’ view that the extent of the ATAT problem has eased. Further, isolating the impact on ATATs of settlement initiatives from the impacts of examinations and promoter investigations is difficult to do, especially when IRS does not have an IRS-wide system for tracking and comparing the results from its enforcement efforts. The AJCA provided new tools to address ATATs. For material advisors, it revised the requirements to disclose reportable transactions and provide lists of their investors to IRS upon request or face penalties. For taxpayers, the AJCA established requirements to disclose reportable transactions or be subject to enhanced penalties.
OTSA received thousands of Forms 8886 from taxpayers to disclose reportable transactions for 2007 through 2009, as table 2 shows. Almost all of these disclosures were associated with loss transactions—most of the losses had not been deemed by IRS to be tax avoidance. OTSA officials said that the number of disclosures dropped in 2008 because IRS combined multiple disclosures from one taxpayer into one disclosure, and increased in 2009 because economic conditions generated more losses that were disclosed as reportable transactions. If taxpayers do not file all required Forms 8886 or file incomplete or inaccurate forms, IRS would lack the information that it needs to make decisions on whether to examine the appropriateness of the transactions being disclosed by taxpayers. Without this transparency, abusive transactions are more likely to stay hidden from IRS. According to OTSA officials, OTSA did not confirm that it always received its copy of the required Form 8886 from taxpayers disclosing a reportable transaction. Taxpayers must file one copy of the form with their tax return and send a second copy directly to OTSA for their initial year of participation. Absent a system to confirm that OTSA always received its copy, IRS cannot know how prevalent this problem might be. However, IRS knows that a problem exists because, according to IRS officials, IRS examiners of tax returns have identified some taxpayers who filed their Form 8886 with their tax return but failed to send it to OTSA. If OTSA does not receive disclosures, it cannot identify transactions that merit examination for appropriateness as well as possible penalties. For individual tax returns filed on paper, IRS had no return processing indicator that would specify when a Form 8886 was received with the return.
IRS had an indicator for corporate, partnership, estate and trust, and tax-exempt returns but did not update that indicator to cover all types of returns when it created the Form 8886; extensive computer programming would have been required. For electronically-filed tax returns with Form 8886 disclosures, OTSA officials said that they did not use existing IRS data to verify if they received copies of the forms. In 2008, OTSA investigated the viability of doing a match to verify if it received its copies. OTSA had data problems and had not made the match a high priority. OTSA officials said that OTSA would not do any match until it also covers paper-filed returns to adhere to IRS’s policy on treating paper- and electronically-filed returns equally for purposes of verification. Recognizing this policy, OTSA officials said that IRS was establishing a new indicator for paper and electronic tax returns to identify each Form 8886 filed. OTSA officials said that the new indicator, if it works as intended, would allow them to identify paper and electronic Form 8886 disclosures that OTSA has not received. OTSA officials said that the new indicator would not be operational until September 2012 for use with 2011 tax returns filed in 2012. Given that checking compliance in filing required Forms 8886 would be facilitated by electronically-filed tax returns, OTSA does annual studies on whether it should pursue authority from the Congress for mandatory electronic filing for all taxpayers who file Form 8886. Studies done in 2008, 2009, and 2010 indicated that mandatory electronic filing would make processing the Form 8886 less time- and labor-intensive and more accurate. However, all three studies concluded that mandating electronic filing was not currently viable or realistic, mainly because, according to the study reports, the great majority of taxpayers filing the Form 8886 did not already file their tax returns electronically and would have to change their filing format.
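In principle, the verification match that OTSA investigated could be as simple as a set difference between two lists of taxpayer identifiers: those whose returns included a Form 8886 and those whose separate copy reached OTSA. The sketch below uses made-up identifiers; IRS’s actual data systems and matching rules are not described in this report:

```python
# Hypothetical sketch of the verification match OTSA considered: compare the
# set of taxpayers whose filed returns included a Form 8886 against the set
# whose separate Form 8886 copy reached OTSA. All identifiers are made up.
returns_with_form_8886 = {"TIN001", "TIN002", "TIN003", "TIN004"}
copies_received_by_otsa = {"TIN001", "TIN003"}

# Taxpayers who filed the form with their return but never sent OTSA its copy.
missing_from_otsa = returns_with_form_8886 - copies_received_by_otsa
print(sorted(missing_from_otsa))  # ['TIN002', 'TIN004']
```

Once the planned indicator flags every return carrying a Form 8886, a difference of this kind would surface the disclosures OTSA never received.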
For example, for all Form 8886 filers in processing year 2009, 14 percent filed their tax return electronically, including 43 percent of corporate filers, 32 percent of partnership filers, 11 percent of individual filers, and 3 percent of estate and trust filers. However, these OTSA studies did not examine whether the returns were already being prepared on computers. If they were, taxpayers could more readily comply with an electronic filing mandate. IRS data compiled from codes collected on tax returns show that about two-thirds of all paper returns in 2008 (and about 92 percent when a paid preparer was used) were prepared on a computer, printed, and mailed to IRS. As another indication of the feasibility of requiring electronic filing, the median adjusted gross income for individual Form 8886 filers in 2006 was about $1.4 million—an income level at which taxpayers likely could afford a computer or a paid preparer to file their tax returns. Instead of pursuing mandatory electronic filing, IRS planned to begin using barcode technology in early 2011. IRS’s plan assumed that taxpayers could use computers to download and complete the Form 8886 from the irs.gov Web site. A barcode on the Form 8886 would be updated automatically from specific fields on the form and then printed on the paper return. IRS could then scan the barcode without verifying the information. Even in this case, taxpayers would still need to send a paper copy of their Form 8886 directly to OTSA. Material advisors may not have filed all of their Forms 8918 or 8264 with OTSA, as required. By analyzing IRS Statistics of Income (SOI) samples of 2007 partnership and S corporation tax returns, we found 668 entities that reported that they filed or were required to file a material advisor form on a reportable transaction.
When we matched the Employer Identification Numbers of these entities against the identifying numbers that appeared on the Forms 8918 and 8264 in OTSA’s material advisor database, only about 5 percent of the 668 entities appeared in the database. For 2007, OTSA believed many partnership and S corporation filers likely confused the citation specified for material advisors (Internal Revenue Code (IRC) section 6111) with section 6011, which deals with the investor disclosure on Form 8886. As the section numbers are similar and the partnership and S corporation forms did not specifically ask if the taxpayer filed a Form 8918 or was a material advisor, OTSA believed that the filers would incorrectly answer the section 6111 question on the return, thinking that they were affirming they had a section 6011 obligation. In 2008 and 2010, IRS revised the relevant question on the partnership and S corporation forms, respectively, specifically mentioning the Form 8918 and material advisor disclosure. OTSA believes this change will correct the mismatches we found. It intends to match the material advisor database against SOI data for 2008 partnership forms to determine if the disparity persists in a year after the question revision. However, matching S corporation data would have to await the availability of SOI information for 2010. IRS received more than 325,000 Forms 8886 for 2006 through 2009. IRS reviewed about 10 percent for completeness, which meant that IRS could understand the transaction and its tax benefits and identify the parties involved. After the review, IRS sent 177 letters to taxpayers on apparently incomplete disclosures, asking for the missing information to be submitted. IRS later determined that 111 (63 percent) of the taxpayers responding did not have either a disclosure requirement or a completeness issue. For various reasons, IRS did not resolve whether all of the other 66 taxpayers had disclosure requirements or complete disclosures. 
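The identifier match described earlier in this section can be sketched in a few lines of set arithmetic. The EINs below are made up; our actual analysis used the SOI sample files and the identifying numbers on the Forms 8918 and 8264 in OTSA’s material advisor database:

```python
# Hypothetical sketch of the EIN match described in the text: what fraction of
# entities that said they filed (or had to file) a material advisor form
# actually appear in the material advisor database? All identifiers are made up.
soi_entities = {f"EIN{n:03d}" for n in range(40)}          # 40 sample entities
advisor_database = {"EIN003", "EIN017"} | {f"X{n}" for n in range(500)}

matched = soi_entities & advisor_database
match_rate = 100 * len(matched) / len(soi_entities)
print(round(match_rate))  # 5 (percent) -- the order of magnitude we found
```

In our actual match, only about 5 percent of the 668 entities appeared in OTSA’s database, which is what prompted the hypothesis about confusion between IRC sections 6111 and 6011.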
In January 2011, OTSA officials said they were contemplating a new process for reviewing all disclosures for completeness based on a 10-question checklist. The completed checklist is to be used to determine whether the disclosure is incomplete and what action to take. OTSA officials said that final decisions on the details of the process have not been made and that the process would not be established until the 2012 filing season at the earliest. Afterward, according to these officials, its success would not be analyzed until after two years of reviews had occurred. As a result, IRS will not know for at least two years whether the new process will overcome the previous problems in deciding if disclosures were really incomplete and in following up with the taxpayers. Without an adequate review process in place, IRS risks accepting filed disclosure forms from taxpayers that do not completely describe the potentially abusive transaction.

Even if the promoter is not a material advisor, IRS still can request lists of investors. Promoters of abusive schemes who do not provide lists to IRS when requested could continue their schemes. Receiving investor lists from these promoters sooner enables IRS to more quickly determine any harm to the government and to work with the Department of Justice to obtain injunctions to stop abusive promoter activity. OTSA officials said they were unaware of IRS comprehensively tracking how often lists requested from material advisors were not received on time. IRS had data on how often material advisors were penalized for not keeping the lists or providing them on time. For 2008 and 2009, OTSA received 11 investor lists within the required 20 business days after the request under section 6112 and did not need to assess timeliness-related penalties under IRC section 6708. Outside of OTSA, IRS assessed five penalties against material advisors for requested investor lists during 2008 and 2009.
Unlike for material advisors, non-material advisors are not subject to the 20-business-day standard for timeliness under section 6112, or to the section 6708 penalty for not meeting that timeliness standard. The 27 SB/SE revenue agents we interviewed who conduct promoter investigations generally agreed that many of the non-material advisors do not quickly provide the lists. We sought such information from SB/SE revenue agents who investigated promoters because most of their investigations involve non-material advisors and because SB/SE did not track how often and how quickly the requested investor lists are received. Fourteen agents provided data on how quickly they received the lists for 54 ongoing investigations of non-material advisors. These non-generalizable data show that IRS received 13 of the 54 requested lists (24 percent) within 20 business days of the request date. IRS received another 22 lists (41 percent) after the 20 days (7 months to receive on average). IRS had not received 19 lists (35 percent), of which 18 had exceeded 20 business days. The agents offered options for inducing non-material advisors to provide investor lists, along with differing opinions on the options’ possible impacts. Many agents said that they issued summonses for investor lists. Some said that bringing a summons to their first meeting with a promoter expedited receiving the lists, while one agent said that some local IRS offices do not encourage bringing a summons to the first meeting. Agents said that extending the statute of limitations could help but raises difficulties. Taxpayers could be burdened by having to keep records longer. IRS officials also had concerns, in certain circumstances, about relying on the extended statute of limitations provided by the AJCA for undisclosed listed transactions. Some revenue agents said that establishing a penalty on non-material advisor promoters who do not provide investor lists within 20 business days could help.
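Tracking the elapsed business days behind figures like those above could be done with a small helper such as the following. This is an illustrative standard-library sketch that counts weekdays only and ignores federal holidays, which any real computation of the statutory window would have to handle:

```python
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count weekdays after `start` up to and including `end` (holidays ignored)."""
    days = 0
    current = start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

# A list requested on Monday, Jan. 3, 2011, and received on Friday, Jan. 28,
# arrives within a 20-business-day window; one received Tuesday, Feb. 1 does not.
print(business_days_between(date(2011, 1, 3), date(2011, 1, 28)))  # 19
print(business_days_between(date(2011, 1, 3), date(2011, 2, 1)))   # 21
```

Applied to each request and receipt date, a helper like this would yield exactly the kind of within-20-days, after-20-days, and not-yet-received breakdown the 14 agents provided to us manually.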
Promoters who view their products as legitimate might quickly provide a list to avoid this penalty. However, the penalty might not deter promoters who hide their transactions, because they believe they can escape detection and thus avoid paying any penalty. The new penalty could be limited to those meeting the definition of a promoter for IRC sections 6700 and 6701. Another option for getting investor lists from promoters who are not material advisors would be to lower the thresholds for material advisors, which IRS had once considered. If these material advisor thresholds were lowered, more promoters of reportable transactions could be required to maintain lists and be penalized for not providing them. However, burdens would increase for promoters with legitimate products, and those who were not legitimate may still not be material advisors and may therefore avoid the requirement. The AJCA revised or added penalties to address abusive transactions. For the revised penalties, the annual number and aggregate dollar amount of penalty assessments increased at least part of the time since AJCA passage in 2004. For example, starting in 2004, the AJCA changed a penalty imposed by section 6700 on promoters of abusive tax shelters from a maximum of $1,000 to 50 percent of the gross income from a promotion. As a result, IRS assessed penalties over $1 million against some promoters. Compared to 2004, the annual aggregate number and amount of penalty assessments was higher through 2009, as figure 4 shows. AJCA provisions revised two penalties for IRS’s use against abusive transactions (see app. II for details). For example, one provision—creating IRC section 6662A—augmented the existing accuracy-related penalty with an accuracy-related penalty for reportable transaction understatements. From fiscal year 2005 through 2009, the number of penalties and aggregate dollar amount generally rose each year.
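The change to the section 6700 penalty described above can be illustrated numerically. This is a deliberately simplified sketch; the statute’s actual computation has conditions not modeled here, such as how gross income is attributed to each abusive activity:

```python
# Simplified illustration of the AJCA change to the IRC section 6700 promoter
# penalty: from a capped dollar amount to 50 percent of the gross income
# derived from the promotion. Statutory nuances are not modeled here.
PRE_AJCA_CAP = 1_000          # former maximum penalty, in dollars
POST_AJCA_RATE = 0.50         # 50 percent of gross income from the promotion

def penalty_post_ajca(gross_income: float) -> float:
    return POST_AJCA_RATE * gross_income

# A hypothetical promotion yielding $3 million in gross income: penalty
# exposure moves from at most $1,000 to $1.5 million, consistent with IRS
# assessing penalties over $1 million against some promoters.
print(penalty_post_ajca(3_000_000))  # 1500000.0
```

Tying the penalty to gross income is what allows assessments to scale with the size of a promotion rather than remaining a fixed, easily absorbed cost.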
The AJCA added or amended three reportable-transaction disclosure penalties, applying to taxpayers in the first case and material advisors in the others (see app. II for details). A new penalty imposed by IRC section 6707A is for taxpayers who fail to adequately disclose reportable transactions. Most of the penalty assessments were for $100,000 or $200,000. A new penalty imposed by section 6707 is for material advisors who fail to adequately disclose reportable transactions. Compared to the 6707A penalty on taxpayers, fewer material advisors were penalized, but most of their penalty amounts exceeded $1 million, resulting in higher aggregate penalty amounts. A penalty imposed by section 6708 is for material advisors who fail to maintain investor lists or provide them to IRS within 20 business days after the request. Most penalty assessments ranged from $740,000 to $1.1 million. Figure 5 shows the number and dollar amount of assessments for the first two penalties for not adequately disclosing reportable transactions— section 6707A against taxpayers and section 6707 against material advisors. After IRS’s increased use of the enhanced penalty sanctions under the AJCA, the Congress amended section 6707A, decreasing the penalty amounts for some cases. Many small businesses had received penalty assessments that exceeded the benefits gained through the transactions. The Small Business Jobs Act of 2010 lowered the penalty amounts (see app. II for changes). Because the 2010 law change was retroactively effective for penalties assessed after December 31, 2006, adjustments on many assessments were needed. IRS officials said that they adjusted 898 closed cases as well as about 100 cases with penalty assessments that were still open as of February 2011. Because ATATs have been a long-standing, ever-changing, and often a hidden problem for IRS, much activity in this area is left to IRS’s judgment. 
For the same reasons, no set of actions taken by IRS would completely eliminate the problem. IRS has shown vigilance in pursuing ATATs with a number of programs and offices trying to attack the problem from different perspectives. While measuring the impact of IRS’s efforts is challenging, having more information on the results of its enforcement efforts, such as why investigations were closed without penalties or injunctions, would better inform IRS management when making judgments about program effectiveness and resource allocation. In addition, if IRS improved the consistency and accuracy of its tracking and reporting of both its ATAT and non-ATAT examination results, the information could be more meaningful to managers as well as to the Joint Committee on Taxation. Further, more could be done to ensure compliance with disclosure requirements by material advisors and taxpayers. If OTSA could verify that it received all required disclosures and that the disclosures were complete, IRS would have more information to determine whether the transactions disclosed were appropriate. However, paper filing continues to be a barrier in processing disclosures, and actions to have more disclosures filed electronically would be beneficial. In addition, IRS has generally been successful in obtaining required disclosures and investor lists from material advisors. Promoters who do not meet the statutory definition of a material advisor, however, face no requirement to provide IRS with their lists of investors within 20 business days after IRS requests a list and no penalties for failing to do so. If IRS started to monitor the timeliness of its receipt of requested investor lists, IRS would be able to determine when actions are needed to obtain the lists sooner. IRS also could consider taking steps to get the lists sooner. Various administrative practices, such as not always having a summons in hand when first meeting a suspected promoter, slow IRS’s receipt of the lists.
Addressing such concerns could help ensure that promoters and taxpayers are complying with congressional intent in requiring provision of the investor lists and better position IRS to ensure that taxes legally due to Treasury are paid. The Congress should consider instituting a penalty on non-material advisor promoters for failing to provide investor lists to IRS within a specified time period when requested, comparable to the 20-business-day requirement for material advisors. We recommend that the Commissioner of Internal Revenue take the following ten actions:

1. To focus resources on promoter investigations most likely to stop abuse, establish a process to ensure that field office staff consistently apply the recently created reason codes for closing investigations without penalties or injunctions, and document how the results are analyzed and used in decisions on investigations to start.

2. To improve reporting on the results of examinations on ATAT issues,
a. require all divisions to supply similar, consistent results from their examinations;
b. separately track the tax amounts recommended, assessed, and collected between ATAT issues and non-ATAT issues; and
c. establish a process to review the accuracy of examination data prior to its inclusion in future reports to the Joint Committee on Taxation.

3. To ensure that Forms 8886 filed with tax returns are also filed with OTSA, after establishing a new indicator for paper and electronic tax returns, establish a process to periodically check whether the filers met their filing obligations with OTSA.

4. To improve IRS’s next study of whether Form 8886 should be filed electronically, identify how often filers already use computers to prepare these forms.

5. To ensure material advisor disclosure forms are filed, investigate why partnerships and S corporations often did not file a form with OTSA even though they reported on their tax returns that they filed the form with IRS or had a requirement to file.

6. To correct problems with its review of the completeness of disclosure forms, ensure that OTSA establishes a new process to review completeness and monitor its success.

7. To monitor the timeliness of investor list receipts, comprehensively track the elapsed days it takes for material advisors and non-material advisors to provide the lists to IRS.

8. To induce non-material advisors to provide investor lists to IRS within a specified time, take steps such as requiring IRS staff to bring a summons for an investor list to the first interview with a suspected non-material advisor and reevaluating the idea of lowering material advisor dollar thresholds.

We sent a draft of this report to the Commissioner of Internal Revenue for comment. We received written comments on the draft from IRS’s Deputy Commissioner for Services and Enforcement on April 29, 2011 (for the full text of the comments, see app. III). IRS agreed that better data may lead to better resource allocation decisions and improved ATAT enforcement efforts. Of our ten recommendations, it fully agreed with seven, disagreed with one (number 7), and partially agreed with two (numbers 8 and 2b). 
In describing actions on the recommendations with which IRS agreed, the Deputy Commissioner stated that IRS would do the following:

update the Internal Revenue Manual’s handling of reason codes for surveying or discontinuing investigations and evaluate whether any of the reason data collected warrant changing how investigations are selected;

ensure that IRS uses the same databases and methodologies (such as across IRS divisions) for public reporting on the examination results of ATAT issues;

develop criteria for consistently using IRS examination result data and a consistent methodology for validating the data before they are released (such as to the Joint Committee on Taxation);

establish a new indicator and a process to regularly review whether filers met their disclosure obligations with OTSA;

improve its next study of whether Form 8886 should be filed electronically by identifying how many Form 8886 filers use computers to prepare the form;

test mismatches of partnership and S corporation information with OTSA information to identify potentially unfiled forms; and

formalize procedures to identify, evaluate, and follow up on incomplete disclosures.

IRS disagreed with our recommendation on comprehensively tracking the elapsed time for advisors to provide investor lists when IRS requests them. IRS commented that the information currently contained in individual case files reflects when information has been requested and received, but that resource and capability constraints may outweigh the benefits of capturing this additional information on a systematic basis. We agree that costs and benefits must be carefully weighed. In that regard, IRS’s data collection would not have to be elaborate. For instance, SB/SE officials already send data on the investor lists received to a central list keeper. These officials also could send the dates when each list was requested and received to that same office. 
In that way, SB/SE could see if the slowness in receiving some lists that we found is prevalent across the division, and other divisions could do the same thing. If the slowness is prevalent, IRS officials would then have the information needed to make decisions on whether IRS is doing all it can to quickly determine and address any harm to the government. Finally, the data collection is feasible: we were able to collect such data from some revenue agents on the timeliness of investor lists received. IRS partially agreed with our recommendation on two options for inducing non-material advisors to provide investor lists within a specified time. It did not fully agree with the first option. In lieu of requiring summonses to be prepared for the first meeting with non-material advisors, IRS stated that its Internal Revenue Manual would recommend that IRS agents consider preparing summonses to use at initial meetings with possibly problematic non-material advisors. We encourage IRS to track how often IRS agents provide summonses at these meetings in the future and whether their doing so expedites obtaining non-material advisor investor lists. If IRS still finds obtaining the lists difficult after changing the Internal Revenue Manual, we encourage it to take additional steps to receive the lists more quickly. IRS agreed with the second option on reevaluating whether lowering material advisor thresholds would be useful. It said it would gather input to make that determination. IRS also partially agreed with our recommendation that it separately track the tax amounts recommended, assessed, and collected between ATAT issues and non-ATAT issues. Although IRS agreed that tracking these amounts by issue (rather than by case, as is currently done) might provide valuable information for management, it cited resource and capability constraints in doing the tracking. 
Recognizing the value of tracking this management information, IRS should explore approaches to leverage its resources in order to provide more accurate and consistent data on the results of its examinations. This can help better inform IRS and the Congress about whether the ATAT examinations are an efficient use of resources in producing desired impacts. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or at whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The statistics we analyzed came from many sources. We obtained data related to promoter investigations from the Lead Development Center’s (LDC) database within IRS’s Small Business/Self-Employed division. Information about examinations came from IRS’s Audit Information Management System-Computer Information System (A-CIS). IRS collected information on settlement initiatives from initiative participants throughout the organization. To obtain information on why promoter investigations were either discontinued or surveyed by IRS, we obtained documentation on these investigations from LDC for fiscal year 2009. LDC received this documentation, which provided the reasons the promoter investigations were discontinued or surveyed, from examiners who performed the investigations. Of 130 investigations that were either discontinued or surveyed, we analyzed documentation on 97 investigations. 
We could not analyze the remaining 33 investigations because either LDC did not receive the documentation or the documentation was incomplete. From our analysis of the 97 investigations, we identified the reasons these investigations were either discontinued or surveyed. To evaluate the results of IRS’s implementation of the AJCA, we selected those sections of the act for which IRS had data on the disclosures and penalties. We used data from different sources. We obtained ATAT disclosure information from the reportable transaction and material advisor databases kept by the Office of Tax Shelter Analysis (OTSA) and penalty information from IRS’s Enforcement Revenue Information System (ERIS). Criteria we used to evaluate the AJCA’s results included whether (1) OTSA received all the reportable transaction and material advisor forms it should have, (2) submitted reportable transaction disclosure forms met OTSA’s standard for completeness, (3) IRS received investor lists from material advisors within 20 business days of the time requested, and (4) the AJCA’s introduction of new penalties and penalty amounts increased the annual number and aggregate dollar amount of ATAT penalties assessed. To determine whether IRS’s requirement for material advisor disclosures to be filed with OTSA was met, we tested the extent of compliance for partnership and S corporation tax returns. The returns we tested were in the IRS Statistics of Income (SOI) division’s samples of partnership and S corporation tax returns for 2007, the last year for which we had information during our SOI work. These returns had a line item asking taxpayers if they had disclosed, or needed to disclose, information about material advisors. For those answering “yes,” we confirmed whether OTSA’s material advisor database showed them filing a material advisor form from the time the AJCA was enacted through much of 2010. After discovering that the database often showed no material advisor forms filed, we followed up with OTSA. 
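The mismatch test described above reduces to a set lookup: returns that answered “yes” on the line item but have no corresponding record in OTSA’s material advisor database. A minimal sketch, with hypothetical record layouts (the actual SOI sample and OTSA database are far more detailed):

```python
def find_unmatched_filers(soi_returns, otsa_filer_ids):
    """Identify filers who reported a material advisor disclosure on their
    tax return but whose ID does not appear in the OTSA database.
    soi_returns: iterable of (filer_id, said_yes) pairs -- hypothetical layout.
    otsa_filer_ids: set of IDs with a material advisor form on file at OTSA."""
    return [filer_id for filer_id, said_yes in soi_returns
            if said_yes and filer_id not in otsa_filer_ids]

# Example: two filers said "yes," but OTSA has a form from only one of them.
sample = [("P-1", True), ("P-2", False), ("S-9", True)]
print(find_unmatched_filers(sample, {"P-1"}))  # ['S-9']
```

Each unmatched ID is a potentially unfiled form warranting follow-up, which is the kind of testing IRS agreed to perform in response to recommendation 5.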
To determine how much time elapsed from when IRS requested lists of investors from non-material advisor promoters until when it received them, we used an IRS spreadsheet, provided in August 2010, of open investigations for which IRS had received lists. This spreadsheet showed investigations being conducted by 90 investigators. We used this spreadsheet to pinpoint IRS investigators from whom we could collect information, rather than to project to a universe of investigators or investigations. From the spreadsheet, we selected the 11 investigators with the most open investigations. We selected another 20 investigators randomly. We also asked to meet with all of the selected investigators in groups to ask general questions about their impressions of how easy or hard it was to obtain the lists. We met with or received written answers from 27 investigators. At our instruction, IRS sent each of the selected investigators a template asking for the dates on which investor lists were requested and received for each investigation. Fourteen investigators provided dates on when investor lists were requested and received from promoters who were not identified as tax return preparers. We excluded preparers because they are required to submit copies of tax returns or the names of taxpayers for whom they prepared tax returns to IRS when requested. We found the IRS databases we used to be reliable for the purposes of this report. We had tested the reliability of A-CIS, ERIS, and SOI data for previous reports, and we supplemented our knowledge through interviews with IRS officials and through documentation review. For LDC and OTSA databases, we reviewed documentation and interviewed IRS officials. When we matched OTSA and SOI data, where appropriate, we ran electronic checks and compared output to other information for reasonableness purposes. We conducted this performance audit from July 2009 through May 2011 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The American Jobs Creation Act of 2004 (AJCA) provided new or revised tools related to what it called tax shelters. For example, it established requirements for material advisors to disclose reportable transactions and provide lists of their investors to IRS. It also added or revised penalties and other sanctions, such as censures and injunctions. Information follows on the AJCA-created or AJCA-changed sections of the Internal Revenue Code (IRC) that we reviewed. Following that is similar information related to amendments to title 31 of the United States Code, dealing with money and finance. The AJCA provision amending section 6111 repealed the law on registering tax shelters as defined therein and began requiring each material advisor to describe any reportable transaction and its potential tax benefits on an information return filed with IRS on a timely basis. The main information return submitted by material advisors is the Material Advisor Disclosure Statement (Form 8918), which superseded the Application for Registration of a Tax Shelter (Form 8264). Table 3 shows the numbers of these forms received from 2003 through 2009. The AJCA provision amending section 6112 required that a material advisor must keep a list identifying each person for whom the advisor acted as a material advisor for a reportable transaction, and provide the list to the Secretary when requested in writing. Table 4 shows the number of lists that IRS’s Office of Tax Shelter Analysis (OTSA) requested from 2006 through 2009. The AJCA amended section 6501 to extend the statute of limitations for IRS to assess taxes related to undisclosed listed transactions. 
Generally, the statute of limitations runs for 3 years after a tax return is filed or due, whichever is later. As amended by the AJCA, the statute of limitations with regard to listed transactions can extend beyond 3 years, up to 1 year after the earlier of the date that (1) the taxpayer disclosed the transaction pursuant to section 6011 or (2) a material advisor satisfied the Secretary’s request for an investor list under section 6112, including the name of the taxpayer in question. According to IRS, it did not have systemic data on whether assessments were made pursuant to section 6501(c)(10) because each case is different and systemic information would be unreliable. The AJCA provision creating section 6662A augmented the existing 20 percent accuracy-related penalty of section 6662 with a new accuracy-related penalty for understated income from reportable transactions. If a taxpayer disclosed a reportable transaction, the penalty would equal 20 percent of the understatement amount. If the taxpayer did not disclose the transaction, the penalty would equal 30 percent of the understatement amount. Table 5 shows an increase in the number of these penalties for fiscal years 2005 through 2009 as well as in the number of abatements, or reductions, of those penalties. The AJCA amended section 6700 to change the penalty amounts. Section 6700 imposes a penalty on persons who (1) organize or assist in the organization of any entity, plan, or arrangement or (2) participate, directly or indirectly, in the sale of any interest in an entity, plan, or arrangement. For the section 6700 penalty to apply, the person must also make, furnish, or cause another person to make or furnish (1) a gross valuation overstatement (as defined therein) as to any material matter or (2) a statement with respect to any tax benefit by reason of holding an interest in the entity or participating in the plan or arrangement. 
Further, the person to whom the penalty applies must know or have reason to know that the statement is false or fraudulent in any material matter. Prior to the enactment of the AJCA, the maximum penalty under section 6700 was $1,000 for each activity (entity or arrangement). The AJCA changed the penalty imposed on someone who knowingly makes a false statement (but not on someone making a gross valuation overstatement) to 50 percent of the person’s gross income from activity involving that statement. Table 6 shows penalties assessed under section 6700 from fiscal years 2004 through 2009. The AJCA provision amending section 6707 repealed the penalty for failure to register tax shelters and established a new penalty. The new penalty imposes on material advisors who fail to disclose reportable transactions or who file false or incomplete information a $50,000 penalty, unless the failure is related to a listed transaction; if the failure is related to a listed transaction, the amount is increased to the greater of $200,000 or 50 percent (75 percent for an intentional failure or act) of the gross income from the transaction. Table 7 shows assessments of this penalty from fiscal years 2005 through 2009. The AJCA provision creating section 6707A established a penalty on any person who fails to include with any return or statement any required information on a reportable transaction. Generally, as amended by the Small Business Jobs Act of 2010, the penalty is 75 percent of the decrease in tax shown on the return resulting from the transaction, or which would have resulted if the transaction complied with federal tax laws. The maximum penalty amount is the same as the penalty amount prior to the change, which is $50,000 ($10,000 for an individual), except for listed transactions, for which the penalty is $200,000 ($100,000 for an individual). The minimum penalty amount is $10,000 ($5,000 for an individual). Table 8 shows the assessments for this penalty from fiscal years 2005 through 2009. 
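The post-2010 section 6707A computation described above (75 percent of the tax decrease, bounded by the statutory minimum and maximum amounts) can be sketched compactly. This is an illustrative simplification of the rule as summarized here, not a substitute for the statute:

```python
def sketch_6707a_penalty(tax_decrease, individual=False, listed=False):
    """Illustrative sketch of the section 6707A penalty after the
    Small Business Jobs Act of 2010: 75 percent of the tax decrease,
    clamped between the statutory minimum and maximum amounts."""
    if listed:
        maximum = 100_000 if individual else 200_000
    else:
        maximum = 10_000 if individual else 50_000
    minimum = 5_000 if individual else 10_000
    return min(max(0.75 * tax_decrease, minimum), maximum)

# A $1 million tax decrease from an undisclosed listed transaction by a
# non-individual would be capped at the $200,000 maximum under this sketch.
print(sketch_6707a_penalty(1_000_000, listed=True))  # 200000
```

The clamping at the minimum illustrates how, before the 2010 change, small businesses could receive assessments that exceeded the benefits gained through their transactions.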
However, IRS was adjusting these amounts in light of the 2010 amendment. The AJCA provision amending section 6708 modified the penalty for failing to maintain the required lists by making it a time-sensitive penalty instead of a per-investor penalty. Thus, a material advisor required to maintain an investor list who fails to make the list available upon written request to the Secretary within 20 business days after the request will be subject to a $10,000 per day penalty. Table 9 shows assessments for this penalty from fiscal years 2005 through 2009. The IRC authorizes civil actions to enjoin anyone from promoting abusive tax shelters or aiding or abetting tax liability understatements. The AJCA expanded this rule so that an injunction could be sought to enjoin a material advisor from engaging in specific conduct subject to penalty under (1) section 6707, failure to file an information return for a reportable transaction, or (2) section 6708, failure to maintain or to furnish within 20 business days of the Secretary’s written request a list of investors for a reportable transaction. According to the Lead Development Center (LDC), it does not track injunctions specifically under section 7408. Table 10 shows the number of injunctions that LDC obtained, regardless of IRC section, from fiscal years 2003 through 2009. Before the AJCA, the Secretary was already authorized to suspend or disbar from practice before the department a representative who was incompetent or disreputable, violated rules regulating practice before the department, or, with intent to defraud, willfully and knowingly misled or threatened the person being represented or a person who might be represented. The AJCA provision related to this section expanded the sanctions the Secretary could impose for these matters in two ways. First, it expressly permitted censure as a sanction. 
Second, it allowed imposing a monetary penalty as a sanction as long as the penalty did not exceed the gross income from the relevant conduct. The penalty could be in addition to or instead of any suspension, disbarment, or censure of the representative. According to Treasury’s Office of Professional Responsibility (OPR), OPR was already censuring before the AJCA was enacted. The act clarified OPR’s authority. Table 11 shows the number of OPR censures in fiscal years 2003 through 2009. OPR officials said OPR had not assessed any monetary penalties. Before the AJCA, citizens, residents, or persons doing business in the United States could be penalized if they willfully did not keep records and file reports when they made a transaction or maintained an account with a foreign financial entity. The AJCA added a civil penalty of up to $10,000 that could be imposed on anyone violating the reporting requirement, whether willfully or not. The AJCA also increased the prior-law penalty for willful behavior to the greater of $100,000 or 50 percent of the amount of the transaction or account. Table 12 shows the penalties imposed under section 5321 from 2003 through 2009. In addition to the contact named above, Ralph Block and Thomas Short, Assistant Directors; Virginia A. Chanley; Laurie C. King; Lawrence M. Korb; Karen V. O’Conor; Ellen M. Rominger; Lou V. B. Smith; Andrew J. Stephens; and James J. Ungvarsky made key contributions to this report.
Abusive tax avoidance transactions (ATAT) range from frivolous tax schemes to highly technical and abusive tax shelters marketed to taxpayers by promoters selling tax advice. ATATs threaten the U.S. tax system's integrity if honest taxpayers believe that others do not pay their fair share of taxes. GAO was asked to (1) describe what is known about trends in ATAT usage; (2) describe results of IRS's ATAT enforcement efforts; and (3) evaluate IRS's implementation of the ATAT provisions in the American Jobs Creation Act of 2004. Using criteria from the act, GAO analyzed statistics and other documents on trends and results and interviewed IRS and other tax experts. While trend data on taxpayers' use of ATATs are limited, IRS and other experts GAO contacted agreed that a problem exists and is continually changing. One theme that emerged from GAO's discussions with these experts is that ATATs marketed by promoters to corporations and wealthy individuals have declined in recent years, although the experts had different views on the extent of the decline. They also said that ATATs have become more international in nature. Even though estimating the extent of the ATAT problem is inexact because ATATs are often hidden, the experts believed that the changing nature of ATATs warrants continuous IRS vigilance. IRS has many ATAT-related enforcement efforts--investigations, examinations, and settlement initiatives--across different divisions but has incomplete data on the results of those efforts. For example, IRS's small business division's promoter investigations help stop promotions, but IRS had incomplete information on why investigations often closed without penalties or injunctions, information that could be used to help decide the types of investigations to start. 
In addition, IRS recommended billions of dollars in additional taxes from examining tax returns with suspected ATATs, but IRS did not identify the part of the additional amount that was collected or that related to the ATAT issue as opposed to other issues. In addition, some ATAT results were reported inconsistently across IRS divisions. Without comprehensive or consistent information, IRS does not have the best information to decide which promoters to investigate and the number of examinations that should be done as well as to evaluate their impacts. Even though the 2004 act increased the requirements for taxpayers and promoters to disclose their use of transactions and enhanced the penalties for improper disclosure, problems existed. IRS received many disclosures of transaction use from taxpayers, but it had no assurance that its Office of Tax Shelter Analysis received all the disclosures it should have. In addition, IRS did not verify that all the disclosures it received were complete, and a new process for reviewing the completeness of disclosures and following up with taxpayers was not yet finalized. Not receiving disclosures or receiving incomplete disclosures of transactions would keep IRS from having information needed to identify the transactions that merit an examination of their appropriateness and to assess related penalties as needed. Finally, certain promoters who are required by law, under threat of penalty, to give IRS their lists of investors within 20 business days of a request generally did so. However, other promoters who are not covered by this requirement often took longer than 20 days to provide the lists without the threat of a similar penalty. IRS did not comprehensively track how quickly the lists were received. Not receiving lists on a timely basis prevents IRS from quickly working to stop promoter activity. 
GAO suggests that Congress consider instituting a penalty aimed at certain promoters not giving investor lists to IRS within a specified time. GAO also recommends that IRS act or establish processes to (1) improve data on the results of ATAT-related investigations and examinations, (2) ensure that required disclosures are filed by taxpayers, (3) review disclosures for completeness, (4) track the time for IRS to receive investor lists, and (5) induce more promoters to provide investor lists by a specified time. In commenting on a draft of this report, IRS agreed with most recommendations but cited resource and capability constraints in tracking ATAT data and investor lists, which GAO believes can be addressed.
The Foundation was established as an independent executive branch agency in 1992 to honor Morris K. Udall’s 30 years of service in the House of Representatives as a leader on issues related to the environment and Native Americans. In 2009, its authorizing legislation was amended to also honor Stewart L. Udall’s public service legacy. The Foundation is committed to educating a new generation of Americans to preserve and protect their national heritage through scholarship, fellowship, and internship programs focused on environmental and Native American issues. The Foundation consists of the Morris K. Udall and Stewart L. Udall Trust Fund, which is used to operate the Foundation’s education programs (Education Trust Fund), and the Environmental Dispute Resolution Fund. The latter fund is available to the Foundation to operate the U.S. Institute for Environmental Conflict Resolution (U.S. Institute), which was established by the Environmental Policy and Conflict Resolution Act of 1998 to promote the principles and practices of environmental conflict resolution and to assist in resolving conflict over environmental issues involving federal agencies. The Foundation had 22 full-time employees as of March 31, 2015. The Foundation depends on federal appropriations for the majority of its operations and received no-year appropriations of roughly $5.5 million and $5.4 million in fiscal years 2014 and 2015, respectively. Under its authorizing legislation, the Foundation is subject to the supervision and direction of the Board of Trustees (Board), which consists of 13 trustees, 11 of whom are voting members of the Board. The authorizing legislation charges the Board with appointing the Executive Director and setting his or her compensation. 
Further, the Foundation’s operating procedures provide that the Board appoints senior management staff members and sets their compensation; approves the organizational structure for the Foundation’s staff; approves the Foundation’s budget and arranges for an annual financial audit; sets policies, including internal controls, for the conduct and management of the agency’s finances, personnel, and programs to be implemented by its staff; and approves the strategic direction and priorities for the Foundation. Over the past 3 years, the Foundation has undergone several external reviews of its internal control policies and procedures, as shown in figure 1. The Foundation developed a Corrective Action Plan during fiscal year 2013 to address the findings identified in the DOI OIG December 2012 audit report and related financial management weaknesses the Foundation identified and to improve internal controls over its key financial management processes. Major elements of the fiscal year 2013 Corrective Action Plan included (1) performing a complete assessment of the Foundation’s current internal control structure to identify adequate, inadequate, and missing controls and (2) developing (or contracting to have developed) policies and procedures to implement appropriate internal controls in all areas where inadequate or missing controls were identified. In 2013, the Foundation contracted with an external consultant to perform an internal control review with an overall goal of achieving compliance with OMB Circular No. A-123 and Standards for Internal Control in the Federal Government. The external consultant’s September 2013 report assessed the Foundation’s implementation of 34 control activities and found no significant deficiencies. Specifically, the external consultant reported that 25 control activities were adequate, 7 control activities had operational deficiencies, and 2 had design deficiencies. 
The Foundation exercised its option to order additional services from the external consultant to perform a follow-up report assessing the implementation of the recommendations included in its September 2013 report. In February 2014, the external consultant issued its follow-up report, which reported that all seven operational deficiencies and one of the design deficiencies had been corrected. Although the other design deficiency, which was related to controls over the delegation of authority, had not been corrected, the external consultant reported that the Foundation had implemented compensating controls in this area. In January 2014, the Foundation’s fiscal year 2014 appropriation assigned the DOI OIG responsibility for providing oversight for the Foundation and provided funding to the DOI OIG to conduct investigations and audits of the Foundation. In September 2014, the DOI OIG issued an inspection report on the Foundation’s internal controls. The DOI OIG’s report objectives were to determine whether the Foundation’s internal controls were consistent with accepted internal control standards and applicable laws and regulations in the areas of (1) personnel actions, (2) contracting actions, and (3) internal control monitoring and assessment processes under FMFIA. Its review was limited to the design of the internal controls at the Foundation and did not include a determination as to whether the internal controls were operating effectively. The DOI OIG concluded that in the areas reviewed, the Foundation’s internal controls appeared consistent with accepted standards and applicable regulations. However, the DOI OIG (consistent with GAO’s December 2013 report) noted that the extent to which the new internal controls help the Foundation successfully comply with applicable laws and regulations will depend on the continued involvement and rigorous oversight of the Board. 
In December 2014, the Foundation renewed its 5-year interagency agreement for assisted acquisitions with the DOI Interior Business Center’s (IBC) Acquisition Services Directorate. With the exception of contracts below the micropurchase threshold of $3,000, the agreement specifies that IBC will perform technical evaluations of contract proposals and award contracts on the Foundation’s behalf, based on best value and within Federal Acquisition Regulation (FAR) guidelines. IBC provides comprehensive acquisition services to federal agencies, managing the entire process from planning, soliciting, and evaluating offers to awarding and administering contracts through closeout. Pursuant to the terms of the interagency agreement, IBC has agreed to assist the Foundation on contracts related to environmental conflict resolution activities, including mediation, facilitation, and assessment services. An IBC official serves as the Contracting Officer for contracts that IBC awards to private service providers on the Foundation’s behalf. The Foundation has made significant progress in improving its internal control environment by hiring experienced senior-level management officials, establishing a senior leadership team, and providing internal control and ethics training. The Foundation has also made significant progress in improving its risk assessment activities by conducting a risk assessment survey and developing written internal control policies and procedures. Foundation management gains knowledge about the daily operation of internal controls from the direct involvement it has with the operations of the Foundation’s programs and activities. Such knowledge serves as an important part of the Foundation’s monitoring activities and provides the primary basis for management’s annual internal control assessment. The Foundation has made significant progress in developing and implementing changes to improve its internal control environment, including the following. 
Hiring of experienced senior-level management. In July 2013, the Foundation hired a General Counsel with extensive experience in providing legal advice in the areas of contracts, personnel, ethics, fiscal, bankruptcy, and administrative law. The General Counsel has been involved in updating and developing several of the Foundation’s personnel and ethics policies for compliance with applicable federal civil service and government ethics laws and implementing regulations. In June 2014, the Foundation hired a new Director of Finance and Operations with extensive experience in implementing and improving internal controls at other federal government agencies; the new director has been involved in conducting risk assessments and improving internal controls at the Foundation. Establishing a senior leadership team. The Foundation established the Foundation Leadership Team (FLT), consisting of six senior executives: the Executive Director, General Counsel, Director of Finance and Operations, Director of the U.S. Institute, Director of Education Programs, and Director of the Washington, D.C., office. The FLT, with authority from the Executive Director, is responsible for, among other things, assessing internal control over financial reporting. This includes clearly communicating the objectives of the risk assessment survey. Providing internal control and ethics training. The Director of Finance and Operations has conducted training on internal control with Foundation staff, which covered the definition of internal control, common internal controls, why internal control is required, and the five elements of internal control included in Standards for Internal Control in the Federal Government, among other internal control-related matters. 
In addition, the General Counsel has conducted ethics training with Foundation staff, which covered ethical principles, laws governing federal employee conduct, the Office of Government Ethics’ regulations entitled Standards of Ethical Conduct for Employees of the Executive Branch, conflicts of interest, outside activities, and postemployment restrictions. The General Counsel has also provided ethics training to the Board, which covered many ethical principles applicable to members of the Board. The Foundation has also made significant progress in developing and implementing changes to improve its risk assessment activities, including the following. Conducting a risk assessment survey. The FLT surveyed all of the Foundation’s employees regarding the impact and likelihood of the most significant risks to the Foundation and asked the employees to indicate the elements in their work that they considered most vulnerable. The Executive Director, the Director of Finance and Operations, the General Counsel, and the Director of Education Programs used the results of the employee risk assessment survey, which included the identification of over 130 different risks, to prioritize the most significant risks to the Foundation. Based on this prioritization, the most significant risks identified by the FLT included documentation and communication of the contract procurement process, purchase card use, the travel process, and travel card use. According to documents from the October 16, 2014, Board meeting, the Director of Finance and Operations stated that the Foundation’s short-term vision for internal controls is to continuously evaluate the Foundation’s work and improve internal controls, eliminating the most significant risks identified in the employee risk assessment survey so that, over time, the items remaining to be resolved are of lower risk. 
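The prioritization step described above — reducing more than 130 surveyed risks to a short list by weighing impact and likelihood — can be sketched roughly as follows. This is a minimal illustration only: the 1–5 rating scale, the impact-times-likelihood scoring, and the sample ratings are assumptions for demonstration, not the Foundation's actual survey methodology.

```python
from statistics import mean

def prioritize(survey_responses, top_n=4):
    """Rank risks by mean impact x mean likelihood across survey responses.

    survey_responses maps each risk to a list of (impact, likelihood)
    ratings, here assumed to be on a 1-5 scale. Returns the top_n risks
    by score. The scale and aggregation method are illustrative only.
    """
    scores = {
        risk: mean(i for i, _ in ratings) * mean(l for _, l in ratings)
        for risk, ratings in survey_responses.items()
    }
    # Highest-scoring risks first; ties keep their original order.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical ratings for a handful of the 130+ surveyed risks.
responses = {
    "contract procurement documentation": [(5, 4), (4, 5)],
    "purchase card use":                  [(4, 4), (5, 3)],
    "travel process":                     [(4, 3), (3, 4)],
    "travel card use":                    [(3, 4), (4, 3)],
    "office supplies inventory":          [(1, 2), (2, 1)],
}

print(prioritize(responses))
```

With these assumed ratings, the four top-ranked risks happen to be the four areas the FLT identified; a low-impact, low-likelihood item falls out of the short list.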
In addition, the FLT plans to use the results of the employee risk assessment survey, and other assessment activities, to assess internal control over financial reporting and to develop further internal control policies and procedures in the most significant risk areas. Foundation management indicated that continuous risk assessment and monitoring will be performed and improved upon each year. The Foundation also communicated the results of the employee risk assessment survey to Foundation staff, including the most significant risks identified by the FLT. The Foundation has developed, or is in the process of developing, formal written policies related to those risks, which included documentation and communication of the contract procurement process, purchase card use, the travel process, and travel card use. Developing written internal control policies and procedures. The Foundation has developed, or is in the process of developing, written internal control policies and procedures. For example, since July 1, 2014, the Foundation has updated or implemented the following changes to its documented internal control policies and procedures. Travel. Implemented a mandatory annual training requirement for travelers; required staff arranging lodging and meetings for large groups to use the process to request quotes from outside vendors or contractors to increase competition; required the Foundation to negotiate a lower per diem rate for staff in travel status over 30 days; implemented the Director of Finance and Operations’ review of all travel vouchers prior to their approval for payment; and required all staff to fly to destinations using federal city-pair fares when available. Purchase cards. 
Implemented mandatory annual training requirements for purchase card holders; reduced the number of purchase card holders to three (two in Tucson, one in Washington, D.C.); and required purchases over the micropurchase threshold to be procured with a purchase order rather than a purchase card. Procurement process. Issued a draft of a new internal contracting policy that reinforces adherence to the requirements of the FAR by entering into an interagency agreement with the IBC for both assisted acquisition contracts and administrative service contracts. Other. Completed the creation of position descriptions for all Foundation staff, including the review of the position descriptions by the General Services Administration (GSA); created a Foundation pay schedule; created a standardized form for performance plans; and implemented the requirement for maintaining an inventory of sensitive property. According to Foundation officials, the FLT plans to use the results of the risk assessment survey, and other assessment activities, to continue developing internal control policies and procedures for those areas in which formal written internal control policies and procedures have not been developed, including areas in which internal control activities have been implemented but not documented in the Foundation’s internal control policies and procedures. In its fiscal year 2014 AFR, Foundation management provided an unqualified statement of assurance that internal controls in effect, from October 1, 2013, through September 30, 2014, provided reasonable assurance that the Foundation met the objectives of FMFIA. The Executive Director and the Director of Finance and Operations are integrally involved in substantially all of the Foundation’s financial transactions and activities. 
For example, the Executive Director and the Director of Finance and Operations review and approve contracts related to environmental conflict resolution activities, receipt of invoices, invoices for payment, credit card authorizations, employees’ biweekly payroll, and the monthly reconciliations of cash receipts and disbursements. As such, Foundation management gains knowledge about the daily operation of internal controls from the direct involvement it has with the operations of the Foundation’s programs and activities. Such knowledge serves as an important part of the Foundation’s monitoring activities and provides the primary basis for management’s annual internal control assessment. In addition to management’s knowledge gained from its integral and substantial involvement in the daily operations of the Foundation, management also considered the following sources of information, as suggested in the FMFIA implementing guidance in OMB Circular No. A-123, that contributed to the basis and support for the annual internal control assessment: External consultant’s assessments of internal control. As reported in our December 2013 report, the Foundation took action to comprehensively assess its internal controls and planned to make changes based on the results of that assessment. Specifically, the Foundation contracted with an external consultant to perform an internal control review with an overall goal of achieving compliance with OMB Circular No. A-123 and Standards for Internal Control in the Federal Government. In our December 2013 report, we determined that the design of this action was consistent with internal control standards related to monitoring operations and internal controls and with FMFIA requirements to assess the effectiveness of internal controls. 
The scope of work for the external consultant’s review included performing an assessment of the Foundation’s internal control structure at the time of the review to identify adequate, inadequate, and missing controls; recommending improvements to controls and the control environment; recommending procedures for annual monitoring and testing of controls; recommending a format for the annual statement of assurance; and reviewing and recommending improvements for communication of control responsibilities to the Foundation’s staff. The external consultant’s September 2013 report assessed the Foundation’s implementation of 34 control activities and identified no significant deficiencies. Of these, the external consultant determined that 25 control activities were adequate, 7 control activities had operational deficiencies, and 2 had design deficiencies. The seven operational deficiencies were related to contracting, charge card purchasing, records administration, data integrity, overtime and compensatory time policy, safety procedures, and food and refreshment policy for meetings. The two design deficiencies related to delegation of authority and property disposition. The Foundation exercised its option to order additional services from the external consultant to perform a follow-up report assessing the implementation of the recommendations included in its September 2013 report. The external consultant issued its follow-up report in February 2014 assessing the Foundation’s implementation of the recommendations included in the external consultant’s September 2013 report, which found that all seven operational deficiencies and the property disposition design deficiency identified in its September 2013 report had been corrected. The delegation of authority design deficiency had not been corrected, but compensating controls were put in place that the external consultant determined adequately compensated for any weakness caused by the control deficiency. 
The external consultant’s follow-up report concluded as follows: “Risk assessment has been strengthened by establishing the Director of Finance and Operations position and including risk assessment as part of the Director of Finance and Operations position description. Many improvements have been made to strengthen internal controls over the past year. A majority of the recommendations contained in the first report have already been implemented. The Foundation has made a concerted effort, and continues to take actions to ensure compliance with the requirements of OMB Circular No. A-123 and associated laws.” DOI OIG’s assessments of internal control. In September 2014, the DOI OIG performed a review of the Foundation to determine whether the Foundation’s internal controls were consistent with accepted internal control standards and applicable laws and regulations in the areas of (1) personnel actions, (2) contracting actions, and (3) internal control monitoring and assessment processes under FMFIA. The DOI OIG review was limited to the design of the internal controls at the Foundation and did not include a determination as to whether the internal controls were operating effectively. The DOI OIG concluded that the Foundation’s internal controls in the areas reviewed appeared consistent with accepted standards and applicable regulations. Audit of the Foundation’s fiscal year 2014 financial statements. During its audit of the Foundation’s financial statements for the year ended September 30, 2014, the Foundation’s independent auditor identified no deficiencies in internal controls that were considered a material weakness or a significant deficiency in financial reporting. Reviews of financial management systems. GSA performs payroll and financial services for the Foundation. 
These services include furnishing all necessary payroll support functions, receipt and disbursement of funds, financial reporting and related accounting functions, and execution of all investments in Department of the Treasury obligations. GSA is considered to be part of the Foundation’s financial management; however, Foundation management is responsible for the integrity and objectivity of the financial information presented in the financial statements. To support management’s annual assurance statement in its fiscal year 2014 AFR for the financial and payroll services provided by GSA, the Foundation relied on the independent service auditor’s Statement on Standards for Attestation Engagements (SSAE) No. 16 reports on GSA’s (1) Pegasys Financial Management System and (2) Payroll Accounting and Reporting (PAR) System. The SSAE No. 16 reports on the Pegasys Financial Management System and the PAR System covered the period of July 1, 2013, to June 30, 2014, and both contained unqualified opinions. In addition, the Foundation relied on letters from GSA notifying the Foundation that from July 1, 2014, through September 30, 2014, there were no significant changes to the system controls for the Pegasys Financial Management System and the PAR System. The Foundation has made significant progress in designing and implementing internal control activities and improving internal control over certain of its personnel and contracting practices, which were highlighted in our December 2013 report. For example, the Foundation has developed a formal written Disciplinary Policy and an Outside Employment Policy and is in the process of drafting guidance for its contracting practices. In addition, the Foundation has implemented internal control activities over certain of its personnel and contracting practices. 
For example, we found that management reviewed and approved an employee’s completed outside employment application and the Contracting Officer’s Representative coordinated with the Program Managers to determine that all contracted work that had been billed was complete. However, the Foundation has not developed formal written internal control policies related to the hiring and separation of employees. Most of the payroll activities, including processing employee time cards and issuing payments, are outsourced to GSA through an interagency agreement that specifies that GSA serves as the principal advisor on matters related to human resource management. However, the Foundation maintains responsibility for all other personnel functions. The Foundation’s General Counsel has issued certain personnel policies, such as the Disciplinary Policy and the Outside Employment Policy, to help ensure that the Foundation’s policies comply with applicable federal civil service and government ethics laws and implementing regulations. Despite the absence of formal written hiring and separation policies, we found that controls over employee hiring, separation, and outside employment had been implemented effectively during our period of review. Design of internal controls over employee hiring and separations. The Foundation acknowledged that it did not have formal written internal control policies and procedures related to the hiring of employees and did not have complete formal written internal control policies related to the separation of employees, including those who separate voluntarily (such as through retirement or a change in employment) or involuntarily. The General Counsel has developed the formal written Disciplinary Policy for an employee’s removal pursuant to disciplinary action and poor performance, which relates to involuntary separations. 
Foundation officials stated that they had not yet documented these internal control policies and procedures because they focused their efforts during fiscal year 2015 on (1) developing formal written internal control policies and procedures that were identified by the FLT’s risk assessment survey and (2) implementing control activities in areas in which the Foundation has not yet had the opportunity to develop formal written internal control policies and procedures. During our previous audit, Foundation officials informed us that they planned to develop and complete formal written internal control policies and procedures over certain personnel practices by early 2014, such as the Disciplinary Policy and Outside Employment Policy. However, such plans did not include developing formal written internal control policies and procedures for hiring of employees and certain other separation processes, and the Foundation’s Corrective Action Plan did not include plans to do so. Federal standards for internal control require that internal control and all transactions and other significant events be clearly documented and that the documentation be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. All documentation and records should be properly managed and maintained. A lack of formal written internal control policies and procedures related to the hiring and separation of employees increases the risk that (1) procedures related to hiring and separations, such as management’s review and approval of an employee’s qualifications and proposed salary before hire and ensuring that separated employees are properly removed from the payroll, may not be properly or consistently carried out and (2) applicable laws and implementing regulations may not be consistently followed. Implementation of internal controls over employee hiring. 
Although the internal control policies and procedures were not fully documented, we found that the Foundation had sufficiently implemented key internal control activities related to the hiring of employees. We interviewed Foundation officials and conducted walkthroughs of specific new hire transactions and determined that the Foundation performed certain key internal control activities relating to the hiring of employees during the test period of our audit. Such key internal control activities included management’s review of GSA position descriptions to verify that each new employee’s salary was within the range of pay in the GSA position description; management’s review and approval of the new employee’s Office of Personnel Management (OPM) request for personnel action; management’s review of monthly detailed spreadsheets attesting to the addition of the new employee; and management’s review of monthly reports attesting to the addition of the new employee’s salary. We assessed the design of these key internal control activities and found them to be consistent with criteria in Standards for Internal Control in the Federal Government. We tested these key internal control activities for all five employees who were newly hired during the test period of our audit, and found that they had been implemented effectively. Implementation of internal controls over employee separations. Similar to hiring, we found that although the separation policies and procedures were not fully documented, the Foundation had sufficiently implemented key internal control activities related to the separation of employees. We interviewed Foundation officials and conducted walkthroughs of specific employee separation transactions and determined that the Foundation performed certain key internal control activities related to the separation of employees during the test period of our audit. 
Such key internal control activities included management’s review and approval of the separated employee’s OPM request for personnel action, management’s review of monthly detailed spreadsheets attesting to the removal of the employee, and management’s review of monthly reports attesting to the removal of the employee’s salary. We assessed the design of these key internal control activities and found them to be consistent with criteria in Standards for Internal Control in the Federal Government. We tested these key internal control activities for all eight employees who voluntarily separated during the test period of our audit, and found that they had been implemented effectively. There were no involuntary separations during the period of our review. Design and implementation of internal controls over outside employment. The Foundation has a formal written Outside Employment Policy, which requires employees to obtain management approval prior to engaging in any outside employment, whether that employment is compensated or voluntary. Based on our review of the Outside Employment Policy, interviews of Foundation officials, and walkthroughs of specific outside employment transactions during the test period of our audit, we identified the Foundation’s key internal control activities for employees engaging in outside employment. Such key internal control activities included each employee’s completed outside employment application; management’s review and approval of the employee’s completed outside employment application; and a legal opinion prepared by the General Counsel, which included a review of applicable statutory and regulatory provisions to ensure that the employee was in compliance with applicable laws and regulations. We assessed the design of these key internal control activities and found them to be consistent with criteria in Standards for Internal Control in the Federal Government. 
We tested these key internal control activities for all four employees who applied for outside employment during the test period of our audit, and found that the Foundation’s key internal control activities for outside employment were implemented effectively in accordance with the Foundation’s Outside Employment Policy. The Foundation’s interagency agreement with IBC helps improve management and oversight of the environmental conflict resolution contracts that IBC services. The total amount paid on the environmental conflict resolution contracts was approximately $2.0 million for the 9-month period July 1, 2014, through March 31, 2015. In December 2014, the Foundation modified its agreement with IBC to include administrative contracts, which effectively removed the Foundation completely from the awarding and administration of contracts. As IBC is much more experienced in acquisition matters, this modification assists the Foundation in implementing sound contracting practices. Accordingly, the Foundation’s draft guidance over its contracting practices focuses on (1) the steps leading up to the submission of contracting transactions and information to IBC and (2) the reconciliation of the output of contracting activity to the reports received from IBC. The draft guidance details certain key internal control activities that the Foundation has implemented in this area. However, we noted that not all key internal control activities were included in the Foundation’s draft guidance. For example, the key control activity to document evidence of management’s receipt and review of contractors’ invoices, including a comparison of the scope and nature of services provided and labor hours billed, was not included in the draft guidance. Federal standards for internal control require that internal control and all transactions and other significant events be clearly documented and that the documentation be readily available for examination. 
Foundation officials stated that they had not yet fully documented and finalized internal control policies and procedures related to contracting because, as noted previously, they focused their efforts on implementing control activities in areas in which the Foundation had not yet developed formal written policies and procedures. As documented in its Corrective Action Plan, Foundation management planned to update and finalize its formal written internal control policies and procedures over its contracting practices. However, the Corrective Action Plan did not establish a date by which the Foundation planned to complete the action to fully update and finalize its formal written internal control policies and procedures over its contracting practices. A lack of fully developed and finalized written internal control policies and procedures related to the Foundation’s contracting practices increases the risk that procedures related to its contracting practices may not be consistently carried out, which in turn increases the risk that the Foundation may, for example, pay for erroneous amounts billed. Based on our review of the Foundation’s draft guidance on its contracting practices, interviews of Foundation officials, and walkthroughs of specific contracting transactions during the test period of our audit, we identified the Foundation’s key internal control activities. Such key internal control activities included management approval of checklists attesting that no conflicts of interest exist; evidence of management review and approval of contracts; and evidence of management receipt and review of contractor invoices, including a comparison of the scope and nature of services provided and labor hours billed. We assessed the design of these key internal control activities by comparing them to the criteria in Standards for Internal Control in the Federal Government and found them to be consistent with federal internal control standards. 
As discussed in the section below, we performed testing on a randomly selected statistical sample of contracting disbursements during the test period of our audit, and found that the Foundation’s key internal control activities over its contracting practices were implemented effectively in accordance with the Foundation’s draft guidance, including key internal control activities not yet documented in that guidance. The Foundation has designed formal written internal control policies and procedures for processing receipts activity and disbursements activity (consisting of payroll, contracting, and other expense transactions). Based on our review of these internal control policies and procedures, we identified the key internal control activities related to the processing of receipts and disbursements, compared them to the criteria in Standards for Internal Control in the Federal Government, and found them to be consistent with federal internal control standards. We selected and tested random statistical samples of receipt and disbursement transactions for the test period of our audit and determined that the Foundation’s key internal control activities over the processing of its receipts and disbursements activity were implemented effectively in accordance with its formal written internal control policies and procedures. The Foundation has formal written internal control policies and procedures for processing its receipts activity. Based on our review of these internal control policies and procedures, interviews of Foundation officials, and walkthroughs of specific receipt transactions during the test period of our audit, we identified the Foundation’s key internal control activities for processing its receipts activity. 
Such key internal control activities included supervisory review of project data in the Foundation’s Program Management Database (PMD); management review and approval of all invoices, such as invoices for training services provided to external parties in PMD; and management review of the invoices spreadsheet attesting to the total amount of invoices. We assessed the design of these key internal control activities by comparing them to the criteria in Standards for Internal Control in the Federal Government and found them to be consistent with federal internal control standards. We selected and tested a random statistical sample of 36 receipt transactions made during the test period of our audit, and found that the Foundation’s key internal control activities over the processing of its receipts activity were implemented effectively in accordance with the Foundation’s formal written internal control policies and procedures. The Foundation’s disbursements activity consists of payroll, contracting, and other expense transactions. For services for which IBC pays the contractor and seeks reimbursement from the Foundation through GSA, GSA asks the Foundation which fund should be used to pay IBC. GSA charges the amount against the Foundation’s obligation balance and transfers the funds to IBC from the Foundation’s Fund Balance with Treasury account. For other expense transactions, such as rent and supplies, the Foundation receives vendor invoices and bills, which it then submits to GSA for payment to the vendor. Payroll disbursements. The Foundation has formal written internal control policies and procedures for processing its payroll disbursements activity. Based on our review of these internal control policies and procedures, interviews of Foundation officials, and walkthroughs of specific payroll disbursement transactions during the test period of our audit, we identified the Foundation’s key internal control activities for processing employee payroll. 
Such key internal control activities included management’s review and approval of requests for personnel actions, management’s review of monthly detailed spreadsheets attesting to the correct salary for employees, supervisory review of employees’ time cards and amendments, and management review and electronic sign-off on employees’ time cards and amendments. We assessed the design of these key internal control activities by comparing them to the criteria in Standards for Internal Control in the Federal Government and found them to be consistent with federal internal control standards. We randomly selected 10 biweekly pay periods from the 9-month period of our audit and then randomly selected five transactions from each pay period, which resulted in 50 payroll disbursement transactions during the test period of our audit. Based on our selection and testing of these payroll disbursement transactions, we found that the Foundation’s key internal control activities over the processing of its payroll disbursements activity were implemented effectively in accordance with the Foundation’s formal written internal control policies and procedures. Contracting disbursements. The Foundation has formal written internal control policies and procedures for processing its contracting disbursements activity. Based on our review of these internal control policies and procedures, interviews of Foundation officials, and walkthroughs of specific contracting disbursements transactions during the test period of our audit, we identified the Foundation’s key internal control activities for processing its contracting disbursement activity. Such key internal control activities included management approval of checklists attesting that no conflicts of interest exist; evidence of management review and approval of contracts; and evidence of management receipt and review of contractor invoices, including a comparison of the scope and nature of services provided and labor hours billed. 
We assessed the design of these key internal control activities by comparing them to the criteria in Standards for Internal Control in the Federal Government and found them to be consistent with federal internal control standards. We selected and tested a random statistical sample of 37 contracting disbursement transactions made during the test period of our audit, and found that the Foundation’s key internal control activities over the processing of its contracting disbursements activity were implemented effectively in accordance with the Foundation’s formal written internal control policies and procedures.

Other expense disbursements. The Foundation has formal written internal control policies and procedures for processing its other expense disbursements activity, such as rent and utilities. Based on our review of these internal control policies and procedures, interviews of Foundation officials, and walkthroughs of specific other expense disbursement transactions during the test period of our audit, we identified the Foundation’s key internal control activities for processing other expense disbursements. Such key internal control activities included supervisory review of invoices; evidence of management receipt of invoices; management’s review and approval of all credit card authorization forms; and management’s review and approval of all invoices, which included evidence of receipt of services. We assessed the design of these key internal control activities by comparing them to the criteria in Standards for Internal Control in the Federal Government and found them to be consistent with federal internal control standards.
We selected and tested a random statistical sample of 50 other expense transactions made during the test period of our audit, and found that the Foundation’s key internal control activities over the processing of its other expense disbursements activity were implemented effectively in accordance with the Foundation’s formal written internal control policies and procedures.

Since the release of GAO’s report in December 2013, the Foundation has made significant progress in improving its internal control environment, risk assessment, and monitoring activities; designing and implementing internal control activities over certain of its personnel and contracting practices; and designing and implementing internal control activities over its receipts and disbursements activity.

However, we found that the Foundation had not documented formal written internal control policies and procedures for its hiring of employees and certain other separation processes and did not include plans to do so in its Corrective Action Plan. In addition, the Foundation had not fully updated and finalized formal written internal control policies and procedures for its contracting practices, including all key internal control activities, such as evidence of management’s receipt and review of contractors’ invoices, and had not established a date by which it planned to complete the action to finalize its policies and procedures for its contracting practices. Until the Foundation fully documents its internal control policies and procedures for certain of its personnel practices, and updates and finalizes its draft guidance for its contracting practices, there is an increased risk that procedures in these areas may not be consistently carried out, which in turn increases the risk that (1) employees could be hired or separated improperly and applicable laws and implementing regulations may not be consistently followed and (2) the Foundation may pay for erroneous amounts billed.
We recommend that the Foundation’s Executive Director take the following two actions:

Fully document the Foundation’s internal control policies and procedures related to the hiring and separation of employees.

Update the Foundation’s draft written policies and procedures over its contracting practices to include all key internal control activities, issue them in final form, and establish a date by which these actions will be completed.

We provided a draft of this report to the Foundation for comment. In its written comments, which are reprinted in appendix II, the Foundation concurred with our recommendations and stated that it will implement the recommended actions. In addition, the Foundation stated that our recommendations will be incorporated in the Foundation’s risk assessment documentation, established as a priority, and included in the Foundation’s fiscal year 2016 Corrective Action Plan.

We are sending copies of this report to the Executive Director of the Morris K. Udall and Stewart L. Udall Foundation, the Deputy Inspector General of the Department of the Interior, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9399 or malenichj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Our objectives were to determine the extent to which the Morris K. Udall and Stewart L.
Udall Foundation (Foundation) has (1) made progress in improving its internal control environment, risk assessment, and monitoring activities; (2) designed and implemented effective internal control over certain of its personnel and contracting practices; and (3) designed and implemented effective internal control over its receipts and disbursements activity (consisting of payroll, contracting, and other expense transactions).

The scope of our audit was the 9-month period July 1, 2014, through March 31, 2015. The Foundation was in the process of making significant changes to its internal control policies and procedures during the first 6 months of calendar year 2014. Therefore, we selected the 9-month period July 1, 2014, through March 31, 2015, as the period of our audit because the transactions in this time period would have been processed under more of the Foundation’s newly developed internal control policies and procedures.

To determine the extent to which the Foundation has improved its internal control environment, risk assessment, and monitoring activities, we considered the criteria in the Federal Managers’ Financial Integrity Act (FMFIA); Standards for Internal Control in the Federal Government; and the implementing guidance in the Office of Management and Budget’s (OMB) Circular No. A-123, Management’s Responsibility for Internal Control, which defines management’s responsibility for internal control in federal agencies. We interviewed Foundation officials regarding the formal written internal control policies and procedures that management has developed and implemented and other actions management has taken to improve its internal control environment. We also interviewed Foundation officials about internal risk assessment and monitoring activities that management performed. In addition, we reviewed the agency financial report for fiscal year 2014 to determine management’s conclusions on the Foundation’s internal controls.
Further, we interviewed Foundation officials and reviewed documentation regarding how they developed internal control assurance statements and the sources of information that provided the basis for the Foundation’s annual assessment of and report on internal control under FMFIA.

To determine the extent to which the Foundation designed effective internal control over certain of its personnel and contracting practices, we interviewed Foundation officials, reviewed written internal control policies and procedures, and performed walkthroughs of specific transactions to inform our understanding of the internal control environment. We also obtained, analyzed, and summarized the Foundation’s written internal control policies and procedures related to certain of its personnel and contracting practices for the 9-month period July 1, 2014, through March 31, 2015, and compared them to the criteria in Standards for Internal Control in the Federal Government.

To test the implementation of internal controls over certain of the Foundation’s personnel practices, we tested key internal control activities for all new hires, separations of employees, and outside employment transactions.

1. Hiring. For the 9-month period July 1, 2014, through March 31, 2015, we tested the key control activities for all five newly hired employees. We assessed the design of these key internal control activities by comparing them to the criteria in Standards for Internal Control in the Federal Government to determine whether these key internal control activities were consistent with federal internal control standards.
We tested these key control activities by inspecting the following: management’s review of General Services Administration (GSA) position descriptions and noting that each new employee’s salary was within the range of pay in the GSA position description, management’s review and approval of the new employee’s Office of Personnel Management (OPM) request for personnel action, management’s review of monthly detailed spreadsheets attesting to the addition of the new employee, and management’s review of monthly reports attesting to the addition of the new employee’s salary.

2. Separations. For the 9-month period July 1, 2014, through March 31, 2015, we tested the key control activities for all eight separated employees. We assessed the design of these key internal control activities by comparing them to the criteria in Standards for Internal Control in the Federal Government to determine whether these key internal control activities were consistent with federal internal control standards. We tested these key control activities by inspecting the following: management’s review and approval of the separated employee’s OPM request for personnel action, management’s review of monthly detailed spreadsheets attesting to the removal of the employee, and management’s review of monthly reports attesting to the removal of the employee’s salary.

3. Outside employment. For the 9-month period July 1, 2014, through March 31, 2015, we tested the key control activities for all four employees who applied for outside employment. We assessed the design of these key internal control activities by comparing them to the criteria in Standards for Internal Control in the Federal Government to determine whether these internal control activities were consistent with federal internal control standards.
We tested these key control activities by inspecting the following: the employee’s completed outside employment application; management’s review and approval of the employee’s completed outside employment application; and a legal opinion prepared by the General Counsel, which included a review of applicable statutory and regulatory provisions to ensure that the employee was in compliance with applicable laws and regulations.

To test the implementation of internal controls over contracting practices, we conducted tests of key internal control activities on a randomly selected statistical sample of 37 contracting disbursement transactions, as described in the section below.

To determine the extent to which the Foundation designed effective internal control over its receipts and disbursements activity (consisting of payroll, contracting, and other expense transactions), we interviewed Foundation officials, reviewed written internal control policies and procedures, and performed walkthroughs of specific transactions. We also obtained, analyzed, and summarized the Foundation’s written internal control policies and procedures related to receipts and disbursements activity for the 9-month period July 1, 2014, through March 31, 2015, and compared them to the criteria in Standards for Internal Control in the Federal Government to determine whether these key control activities were consistent with federal internal control standards.

Table 1 summarizes the receipts and disbursements activity for the 9-month period July 1, 2014, to March 31, 2015, and randomly selected statistical sample sizes for each area. As noted above, we conducted internal control-related walkthroughs and interviews, all of which indicated that the designed internal controls were implemented. We incorporated assurances gained from these additional audit steps into sample size determinations to improve efficiency while maintaining effectiveness.
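The appendix does not state the sampling formula behind the sample sizes in table 1, but the planning parameters it quotes (95 percent confidence, an error rate of no more than 6.9 percent) are consistent with the standard attribute-sampling bound for a sample that contains zero exceptions. The sketch below is an illustration only, under that zero-exception assumption; the function name is ours, not the report's:

```python
def upper_error_bound(n, confidence=0.95):
    # One-sided exact binomial upper bound on the population error rate
    # when a sample of n items contains zero exceptions:
    # solve (1 - p)**n = 1 - confidence for p.
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n)

# A zero-exception sample of 42 items supports a conclusion, at 95 percent
# confidence, that the population error rate is at most about 6.9 percent.
print(round(upper_error_bound(42), 3))  # prints 0.069
```

Under this formula, a zero-exception sample of about 42 items would support the stated 95 percent confidence and 6.9 percent bound on its own; the sizes actually drawn (36 receipt, 37 contracting, and 50 other expense transactions) differ because, as the appendix notes, assurance gained from walkthroughs and interviews was incorporated into the sample size determinations.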
We planned our testing of each area to be 95 percent confident that the actual error rate associated with sampling error inherent in these statistical samples is less than or equal to 6.9 percent.

To test the implementation and effectiveness of internal controls over receipts and disbursements activity (consisting of payroll, contracting, and other expense transactions), we conducted transaction tests of key internal control activities as described below.

1. For the 9-month period July 1, 2014, through March 31, 2015, we selected and tested a random statistical sample of 36 receipt transactions to determine whether the Foundation’s key internal control activities over the processing of its receipts activity were implemented effectively in accordance with the Foundation’s formal written internal control policies and procedures. We tested these key control activities over the processing of receipt transactions by inspecting the following: supervisory review of project data in the Program Management Database (PMD), management review and approval of all invoices, such as invoices for training services provided to external parties in PMD, and management review of the invoices spreadsheet attesting to the total amount of invoices.

1. Payroll disbursements. For the 9-month period July 1, 2014, through March 31, 2015, we randomly selected 10 biweekly pay periods from the 9-month period of our audit and then randomly selected five transactions from each pay period, which resulted in 50 payroll disbursement transactions. We then determined whether the Foundation’s key internal control activities over the processing of its payroll disbursements activity were implemented effectively in accordance with the Foundation’s formal written internal control policies and procedures.
We tested the key control activities over the processing of payroll disbursement transactions by inspecting the following: management’s review and approval of requests for personnel actions, management’s review of monthly detailed spreadsheets attesting to the correct salary for employees, supervisory review of employees’ time cards and amendments, and management review and electronic sign-off on employees’ time cards and amendments.

2. Contracting disbursements. For the 9-month period July 1, 2014, through March 31, 2015, we selected and tested a random statistical sample of 37 contracting disbursement transactions to determine whether the Foundation’s key internal control activities over the processing of its contracting disbursements activity were implemented effectively in accordance with the Foundation’s formal written internal control policies and procedures. We tested the key control activities over the processing of contracting disbursement transactions by inspecting the following: management approval of checklists attesting that no conflicts of interest exist; evidence of management review and approval of contracts; and evidence of management receipt and review of contractor invoices, including a comparison of the scope and nature of services provided and labor hours billed.

3. Other expense disbursements. For the 9-month period July 1, 2014, through March 31, 2015, we selected and tested a random statistical sample of 50 other expense transactions to determine whether the Foundation’s key internal control activities over the processing of its other expense disbursements activity were implemented effectively in accordance with the Foundation’s formal written internal control policies and procedures.
We tested the key control activities over the processing of other expense disbursement transactions by inspecting the following: supervisory review of invoices; evidence of management receipt of invoices; management’s review and approval of all credit card authorization forms; and management’s review and approval of all invoices, which included evidence of receipt of services.

We conducted this performance audit from April 2015 to November 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, William Boutboul (Assistant Director), Sharon Byrd, Francine DelVecchio, Lauren S. Fassler, Wilfred Holloway, Gail Luna, Cynthia Ma, Kevin McAloon, and Diana Vu made significant contributions to this report.
In December 2013, GAO issued a report that described the Foundation's actions to improve its internal control assessment process and its controls over personnel and contracting. A fiscal year 2015 congressional directive includes a provision for GAO to conduct a follow-up evaluation of the Foundation's internal controls. This report examines the extent to which the Foundation has (1) made progress in improving its internal control environment, risk assessment, and monitoring activities; (2) designed and implemented effective internal control over certain of its personnel and contracting practices; and (3) designed and implemented effective internal control over receipts and disbursements activity. For the 9-month period July 1, 2014, through March 31, 2015, GAO reviewed relevant Foundation documents, interviewed Foundation management, reviewed key processes, performed observations, and tested transactions for key internal control activities.

The Morris K. Udall and Stewart L. Udall Foundation (Foundation), an executive branch agency, provides educational opportunities related to environmental policy and Native American health care and tribal policy and assists in resolving environmental disputes that involve federal agencies. Since GAO's 2013 report, the Foundation has made significant improvements to several key areas as detailed below. The Foundation also has made significant progress designing and implementing internal control activities over certain of its personnel and contracting practices. However, the Foundation has not, as called for under federal internal control standards, (1) fully documented its policies and procedures related to the hiring and separation of employees and (2) updated and finalized its policies and procedures over its contracting practices to include all key internal control activities, such as providing evidence of management's receipt and review of contractors' invoices.
Foundation officials stated that they had not yet performed these actions as they have focused their efforts during fiscal year 2015 on developing formal written internal control policies and procedures in areas that were previously identified by Foundation management as significant risks. However, the Foundation's Corrective Action Plan does not include steps to document hiring and separation policies and does not have a completion date for finalizing its contracting policies. Until this is addressed, there is an increased risk that procedures in these areas may not be consistently carried out, which in turn increases the risk that (1) employees could be hired or separated improperly and applicable laws and implementing regulations may not be consistently followed and (2) the Foundation may pay for erroneous amounts billed.

The Foundation also effectively designed and implemented internal controls over its receipts and disbursements activity. The Foundation designed formal written internal control policies and procedures in these areas consistent with federal internal control standards. Based on tests of randomly selected statistical samples of transactions, GAO found that the Foundation effectively implemented key internal control activities over the processing of its receipts and disbursements activity.

GAO recommends that the Foundation (1) fully document its internal control policies and procedures related to the hiring and separation of employees and (2) update its draft written policies and procedures over its contracting practices to include all key internal control activities, issue them in final form, and establish a date for completion. In commenting on a draft of this report, the Foundation concurred with the recommendations and stated that they will be included in its 2016 Corrective Action Plan.
The need for strong leadership and a governmentwide strategic view of information management has long been recognized as critical. Along with establishing a single policy framework for federal management of information resources and formalizing the institutionalization of IRM as the approach governing information activities, the Paperwork Reduction Act (PRA) in 1980 created OIRA to develop IRM policy and oversee its implementation, at the same time giving it oversight responsibilities in specific IRM functional areas. The OIRA administrator is also to serve as the principal adviser to the director of OMB on IRM policy. The Clinger-Cohen Act of 1996 amended PRA to also give OIRA, through the director, significant leadership responsibilities in supporting agencies’ actions to improve their IT management practices.

In addition to these statutory responsibilities, OIRA is responsible for providing overall leadership of executive branch regulatory activities. OIRA also reviews significant new regulations issued by executive departments and agencies (other than independent regulatory agencies) before they are published in the Federal Register. In calendar year 2000, OIRA staff reviewed approximately 2,900 proposed and 4,500 final rules.

OIRA is organized into five branches: Information Policy and Technology Management, Statistical Policy, Commerce and Lands, Human Resources and Housing, and Natural Resources. Information Policy and Technology is responsible for information dissemination, records management, privacy and security, and IT. Statistical Policy, headed by the chief statistician, is responsible for the statistical policy and coordination requirements contained in the act.
Desk officers in Commerce and Lands, Human Resources and Housing, and Natural Resources are responsible for information collection and regulatory review and related issues for specific agencies in a matrixed fashion, in consultation with relevant OIRA branches as well as the budget side of OMB. As of December 31, 2001, OIRA had a total of 51 full-time equivalent (FTE) staff assigned to the five branches: Information Policy and Technology Management (12 FTEs), Statistical Policy (6), Commerce and Lands (8), Human Resources and Housing (9), and Natural Resources (9). The OIRA Records Management Center accounted for one additional position; the Office of the OIRA Administrator accounted for the remaining six positions. OIRA has been allotted and is in the process of filling 5 additional slots.

Two other entities perform PRA-related activities. First, the Chief Information Officers (CIO) Council was established by executive order in July 1996 as the principal interagency forum for improving agency IRM practices. For example, the Council is to make recommendations for overall IT management policy, procedures, and standards, and to provide advice to OMB on the development of the governmentwide strategic IRM plan required by PRA. The Council is composed of the CIOs and deputy CIOs from 28 federal agencies, plus senior officials from OMB.

Second, last June OMB established the position of associate director for information technology and e-government. This individual is responsible for (1) working to further the administration’s goal of using the Internet to create a citizen-centric government; (2) ensuring that the federal government takes maximum advantage of technology and best practices to improve quality, effectiveness, and efficiency; and (3) leading the development and implementation of federal IT policy.
In addition, the associate director is responsible for (1) overseeing implementation of IT throughout the federal government, (2) working with the deputy director for management—also described by OMB as the federal CIO—to perform a variety of oversight functions statutorily assigned to OMB, and (3) directing the activities of the CIO Council.

We have previously reported on OIRA’s efforts to respond to the PRA requirements for a governmentwide strategic plan. In 1998, we reported that none of the various reports OIRA had designated since 1995 as being the strategic IRM plan clearly discussed the objectives and means by which the federal government would use all types of information resources to improve agency and program performance—a key PRA requirement.

A Broad, Governmentwide Perspective: More Imperative Than Ever

Recent events have highlighted information as not only an asset but a critical tool, essential to achieving the fundamental purposes of government. In the aftermath of the attacks of the past few months, agencies have clearly struggled with issues concerning intelligence gathering, information sharing and dissemination, security, and critical information technology infrastructure. For example:

Our September 2001 combating terrorism report highlighted that the growing threat of terrorism presented evolving challenges to the existing framework for leadership and coordination. We reported that the interagency and intergovernmental nature of programs to combat terrorism make it important that certain overall leadership and coordination functions be performed above the level of individual agencies. Accordingly, we recommended that the President appoint a single focal point with responsibility for overall leadership and coordination, including the development of a national strategy.
The President subsequently appointed former governor Tom Ridge as the new director of homeland security, responsible for coordinating federal, state, and local actions and for leading and overseeing such a comprehensive approach to safeguarding the nation against terrorism. The successful formulation of such a comprehensive strategy will require development of one overall plan for the collection and analysis of information relating to terrorist activities or threats across the United States, and the securing of IT systems to facilitate the sharing of this information among the many entities involved.

That same report also addressed the need to protect critical federal systems from computer-based attacks. As we reported, while an array of activities had been undertaken to implement a national strategy to mitigate risks to computer systems and the critical operations and infrastructures they support, progress in certain key areas had been slow. Specifically, agencies had taken steps to develop critical infrastructure protection plans, but independent audits continue to identify persistent, significant information security weaknesses that place federal operations at risk. Further, while outreach efforts by numerous federal entities to establish cooperative relationships with and among private and other nonfederal organizations had raised awareness and prompted information sharing, substantive analysis of sector-wide and cross-sector interdependencies and vulnerabilities had been limited. We recommended that the federal government’s critical infrastructure protection strategy, which was under review at the time of our report, define (1) specific roles and responsibilities, (2) objectives, milestones, and an action plan, and (3) performance measures.

The recent attacks have also highlighted the need for immigration, law enforcement, intelligence, and defense and foreign policy agencies to better share information on domestic and international terrorists and criminals.
Concerns have been raised that the various databases and information systems containing this information may not be sufficiently linked to ensure that all levels of government have complete and accurate information.

Recent events have also reemphasized the importance of ongoing efforts to improve the public health infrastructure that detects disease outbreaks, identifies sources and modes of transmission, and performs laboratory identification. According to the Centers for Disease Control and Prevention (CDC), the ability to share information on potential threats and remedial actions, and exchange data on newly identified disease outbreaks, is critical to our defense against bioterrorism. However, we, CDC, and others have identified deficiencies in the information systems and telecommunications capabilities at the local, state, and national levels that hinder effective bioterrorism identification and response. For example, in March 2001, CDC recommended that all health departments have continuous, high-speed access to the Internet and standards for data collection, transport, electronic reporting, and information exchange that protect privacy and seamlessly connect local, state, and federal data systems. In recent testimony, CDC emphasized that since September 11 it has accelerated its efforts to work with state and local health agencies, share critical lessons learned, and identify priority areas for immediate strengthening.

Beyond the recent terrorist acts, emerging trends also make clear the importance of information resources to government, and the need for a strategic approach. One such trend is the continuing shift from an industrial to a knowledge-based and global economy in which knowledge becomes the main driver of value and creation of wealth. One characteristic of a knowledge-based economy is a higher set of public expectations about government performance and accountability.
In addition, the knowledge-based economy presents complex issues that require input from multiple institutions at different levels of government and within the private and nonprofit sectors. To address these challenges, government needs processes and structures that embrace long-term, cross-issue, strategic thinking. Understanding and developing these new processes will require active use and exchange of knowledge and information that are relevant, timely, accurate, valid, reliable, and accessible.

The administration has also recognized the need to improve government performance and, as a result, has established an ambitious agenda that is dependent on effective management of information resources. One of the governmentwide goals in The President’s Management Agenda for Fiscal Year 2002 is to expand e-government to provide high-quality service to citizens at reduced cost, make government services more accessible, and increase government transparency and accountability. To accomplish this, the administration plans to support projects that offer performance gains across agency boundaries, such as the development of a Web-based portal that will allow citizens to apply for federal grants on-line. Making this strategy successful will require the government to address such challenges as implementing appropriate security controls, protecting personal privacy, and maintaining electronic records.

Given the changing environment in which the need for a performance-based federal approach to managing the government’s information resources is of paramount importance, strategic planning provides an essential foundation. It defines what an organization seeks to accomplish, identifies the strategies it will use to achieve desired results, and then determines—through measurement—how well it is succeeding in reaching results-oriented goals and achieving objectives.
An important element of a strategic plan is that it presents an integrated system of high-level decisions that are reached through a formal, visible process. The plan is thus an effective tool with which to communicate the mission and direction to stakeholders. However, the CIO Council plan that was prepared to respond to the requirements of the PRA is not an effective and comprehensive governmentwide plan. Specifically, the plan’s governmentwide goals (1) are not linked to expected improvements in agency and program performance and (2) do not comprehensively address IRM. In addition, strategies for reaching the goals are incomplete. Additional documents that OIRA cited as supplementing the CIO plan do not address the weaknesses we identified. As a result, agencies are left to address information needs in isolation without a comprehensive vision to unify their efforts. Further, the risk is increased that current and emerging IRM challenges will not be met. Over the past 20 years, the Congress has put in place a statutory framework to improve the performance and accountability of executive agencies and to enhance executive branch and congressional decisionmaking. Results-oriented management legislation, coupled with legislation reforming IT, has enabled substantial progress in establishing the basic infrastructure needed to create high-performing federal organizations. PRA requires OIRA to develop and maintain a governmentwide strategic IRM plan to describe how the federal government will apply information resources to improve agency and program performance. Specifically, this strategic plan was intended to provide a comprehensive vision for the future of IRM in government, and would establish governmentwide goals for using information resources to improve agency and program performance, and describe the strategies, including resources needed, to accomplish these goals. 
PRA further stipulates that the strategic IRM plan must include (1) plans for enhancing public access to and dissemination of information using electronic and other formats; (2) plans for meeting the information technology needs of the government; (3) plans for reducing information burdens and meeting shared data needs with shared resources; and (4) a description of progress in applying IRM to improving agency mission performance. The plan is also to be developed in consultation with the archivist of the United States, the administrator of general services, the director of the National Institute of Standards and Technology, and the director of the Office of Personnel Management. Since 1998, OIRA’s response to the PRA mandate for a strategic plan has been to jointly publish a strategic plan with the CIO Council. The most recent plan, the CIO Council Strategic Plan for Fiscal Years 2001-2002, was issued in October 2000. The development of this plan was the result of extensive discussion, both internally with agency CIOs and with some external stakeholders, such as state and IT industry CIOs. The CIO Council plan articulates a vision that was used to guide the plan’s goals and objectives: Better government through better use of information, people, processes, and technology. The plan reflects the Council’s view of critical, cross-cutting IT issues that are affecting the federal government’s ability to serve its citizens. It also provides background and rationale for the issues, and a brief description of the Council’s past accomplishments in each area. For fiscal years 2001–2002, the Council identified six themes that frame the specific goals that accompany them. These goals are as follows:

- Connect all citizens to the products, services, and information of their government.
- Develop interoperable and innovative governmentwide IT initiatives.
- Implement a secure and reliable information infrastructure that the customer can access and trust.
- Develop IT skills and resources to meet mission objectives.
- Collaborate between the public and private sectors to achieve better government.
- Develop investment management policies, practices, and tools that enable improved delivery of government programs and services.

Each goal has a set of associated objectives or major actions needed. A total of 88 detailed initiatives are provided, representing specific, concrete actions that the Council can take to implement its objectives. While a robust document for the Council, this plan does not constitute an effective governmentwide strategic IRM plan under PRA. First, although the plan establishes a number of goals that are clearly governmentwide in nature, these goals are not linked to expected improvements in agency and program performance. For example, the plan contains a governmentwide goal of interoperable and innovative IT initiatives; however, the plan does not discuss how these initiatives will improve agency performance or establish targets for improvement. Further, the plan’s goals do not address IRM comprehensively; for example, statistical activities, records management, and the collection and control of paperwork are not addressed. Second, while the plan contains strategies for reaching the goals, these strategies are incomplete. Specifically, the plan does not address, even at a high level, OIRA’s policymaking and oversight role in helping to attain those goals. Further, the plan does not discuss the resources needed governmentwide—by OIRA, the CIO Council, and federal agencies—to achieve its goals. Finally, the plan addresses some but not all of the remaining items highlighted in PRA. Specifically: The plan does address enhancing public access to and dissemination of information. The first goal—connecting all citizens to the products, services, and information of their government—is focused on making government information accessible and facilitating transactions with citizens. 
Strategies to accomplish this goal include developing the FirstGov.gov portal for government services. The plan includes a discussion of meeting the IT needs of the government. Specifically, goal six focuses on IT investment management practices and tools to improve delivery of government services and programs. Strategies include improving the quality of data used to support investment decisionmaking, information technology acquisition strategies, and IT performance measurement. It does not address reducing the information burden on the public. While it includes goals and strategies that may ultimately result in burden reduction—such as creating interoperable and innovative governmentwide initiatives—these are not explicitly linked to reducing burden. The plan also does not include a discussion of meeting shared data needs with shared resources, as required by the act. Notably lacking in the plan is any description of progress already made in applying IRM principles to improving agency performance and mission accomplishment. Further, the plan’s performance measures are not geared toward providing the required information on progress. These measures are solely focused on gauging Council progress in meeting the goals, rather than on progress in improving agency and program performance. In regard to the consultations required by PRA, representatives of key agencies currently sit on the Council and, thus, participated in the development of the plan, according to OIRA and CIO Council officials. OMB officials also indicated that by conducting meetings with these agencies, and through other guidance and review activities, the strategic viewpoint of these senior officials was captured. In discussing our views of the CIO Council plan, OMB officials responded that while the CIO Council plan is OIRA’s primary means of complying with the strategic planning requirements under PRA, OMB produces a range of other documents that also contain elements of the governmentwide plan. 
It is this collection of documents, as a whole, that constitutes the governmentwide strategic IRM plan under PRA. According to OMB officials, these additional documents are as follows: Government Information Security Reform Act. Under this act, agencies are required to report to OMB annually on independent evaluations of their information security programs. OMB is then required to summarize these reports; OMB officials said that this summary provides strategic direction for the security area. Agencies reported to OMB in September 2001; OMB issued the governmentwide summary on February 13, 2002. Budget Information. OMB officials cited two budget documents that provide governmentwide strategic direction. According to these officials, Table 22-1 in the budget sets strategic direction for IT and e-government and discusses agency performance. In addition, these officials stated that the exhibit 53s, submitted by agencies as part of the budget process, provide specific performance information on planned spending for major and significant information systems. In addition, the chief statistician cited the annual OMB report, Statistical Programs of the United States Government, which describes proposed funding and priority activities for federal statistics. Plans Under the Government Paperwork Elimination Act. Under this act, agencies are required to report to OMB on their plans for providing the public with the option of submitting, maintaining, and disclosing required information electronically, instead of on paper. OIRA has summarized these plans in a database which, according to OIRA, provides part of the strategic direction for IRM. In September 2001, we reported on the status of agency implementation of the act. We found that although agency implementation plans submitted in October 2000 included much potentially useful information, many omissions and inconsistencies were evident. 
In addition, we noted that the plans did not provide sufficient information regarding agencies’ strategic actions that could minimize the risk of not meeting the deadline for providing electronic options. We concluded that given these shortcomings, OMB would be challenged in its oversight role of ensuring that agencies comply with the act. In commenting on this report, OMB officials noted that in October 2001, they collected additional information from agencies to address these issues; we did not review this additional information. The Information Collection Budget. Each year, OIRA publishes an Information Collection Budget by gathering data from executive branch agencies on the total number of burden hours OIRA approved for collection of information at the end of the fiscal year, and agency estimates of the burden for the coming fiscal year. This document includes a governmentwide goal for burden reduction and reports the reasons for any increasing burden. It also highlights agency efforts to streamline and reduce information collections from the public for the upcoming fiscal year. The National Archives and Records Administration (NARA) Strategic Plan. OMB officials stated that this plan provides a strategy for how NARA plans to fulfill its mission and that agency records managers regard this plan as providing strategic direction for their own activities. The President’s Management Agenda. Again, according to OMB officials, the e-government goal contained in the president’s management agenda provides a strategic vision for expanding the use of e-government. According to OMB officials, this will soon be supplemented by a report specifically on the e-government initiative, which will further address strategic direction for e-government. These documents—whether viewed individually or in total—do not address the weaknesses we have identified. 
Of these documents, one report stands out as governmentwide and strategic—the president’s management agenda, which articulates the goal of expanding e-government as well as strategies for accomplishing that goal. Although this agenda adds additional perspective on the administration’s strategic direction for certain aspects of IRM, it is not broad enough to compensate for the weaknesses in the CIO Council plan. In addition, the current NARA strategic plan for fiscal years 1997–2007 includes no governmentwide goals and strategies for records management. Rather, NARA’s articulated goals and strategies focus on the mission of the agency: providing ready access to information that documents citizens’ rights, officials’ actions, and the national experience. The remaining documents deal with various aspects of the government’s use of information resources, but are not strategic or focused on the future, and do not provide goals, strategies, and performance measures. Further, the multitude of documents—issued at different points in time—that OIRA indicated comprise the governmentwide plan are neither integrated nor formalized in any way. Nor is there any published tool to identify and locate these documents, should agencies, the Congress, or other stakeholders want to view the plan in its totality. As a result, these documents do not clearly communicate the strategic IRM vision of the government. The shortcomings we have identified in the current plan indicate that OIRA has not devoted sufficient attention to producing an effective governmentwide strategic IRM plan. As a result, agencies are left to address information needs in isolation without a comprehensive vision to unify their efforts. 
Further, the risk is increased that investments in IT will not be leveraged across the government; that duplicative initiatives will be undertaken; that opportunities for data sharing and public access will be missed; that privacy will be compromised; and that the security of information, information systems, and critical infrastructure will be jeopardized. Without OIRA leadership, top-level management commitment, and the application of appropriate resources to ensure the development of a comprehensive and meaningful plan, the mounting challenges that the government faces in managing information may not be met. While the CIO Council’s strategic plan does not effectively serve as the governmentwide vehicle envisioned under PRA, OIRA is responding to other PRA policymaking, oversight, and functional requirements. OIRA officials see themselves as having provided leadership in IRM, and point to the successful resolution of the Year 2000 problem as among OMB’s greatest accomplishments over the last 5 years. They also cite the establishment of FirstGov.gov as a major accomplishment. We agree that these are significant. In fact, our work on the Year 2000 issue specifically acknowledged the important role that OMB played in leading, coordinating, and monitoring federal activity. And in 2000 we testified that FirstGov.gov represented an important, previously unavailable capability that was rapidly and successfully put into place. Regarding the development of general IRM policy, OIRA officials said that they see policymaking as a primary responsibility. OIRA most recently updated Circular A-130, Management of Federal Information Resources, in November 2000 to incorporate changes resulting from the Clinger-Cohen Act of 1996 and subsequent policies outlined in OMB Circular A-11. 
This version of Circular A-130 specifically incorporates the requirements that agencies focus IRM planning to support their strategic missions, implement a capital planning and investment control process that links to budget formulation and execution, and rethink and restructure their business processes before investing in information technology. In terms of oversight, according to OIRA officials, they leverage existing statutory processes, including reviews of the budget, proposed agency information collections, regulations, legislation, and systems of records under the Privacy Act to oversee agency IRM activities. Additionally, they noted that they work with agency CIOs through the budget process, Government Performance and Results Act reporting, and information-collection reviews to further policy oversight. OIRA officials also emphasized their work with the CIO Council and other interagency groups as a means of overseeing agency activities. They stressed that OMB is not an audit organization, and that A-130 requires agencies to monitor their own compliance with IRM policies, procedures, and guidance. OIRA has also taken action to respond to the specific IRM functional responsibilities in PRA: information collection, dissemination, statistical policy and coordination, records management, privacy and security, and IT. Since 1995, OMB has issued guidance in each of these areas including on such topics as Internet privacy, dissemination, and information technology. In addition, it has responded to specific requirements by reviewing and approving proposed agency information collections, appointing a chief statistician to coordinate statistical activities, seeking statutory authority to expand data sharing among statistical agencies, and working with the CIO Council to improve IT management. The full range of these actions is recounted in appendix II. Our past work demonstrates, however, that OIRA faces continuing and new challenges in each of these areas. 
For example: Information Collection/Burden Reduction. Over the past 3 years, we have reported that federal paperwork has continued to increase. For example, in April 2001, we reported that paperwork had increased by nearly 180 million burden hours during fiscal year 2000—the second largest 1-year increase since the act was passed. This increase was largely attributable to the Internal Revenue Service, which raised its paperwork estimate by about 240 million burden hours. We also reported that PRA violations—in which information-collection authorizations from OMB had expired or were otherwise inconsistent with the act’s provisions—had declined from 710 to 487, but were still a serious problem. We concluded that while OIRA had taken some steps to limit violations, more needed to be done, including taking steps to work with the budget side of OMB to bring agencies into compliance. In commenting on this report, OMB officials noted that in November 2001, the OIRA administrator and OMB general counsel sent a memorandum to agencies stressing the importance of having agencies eliminate existing violations and prevent new ones. Information Dissemination. Two recent reports underscored the evolving nature of information dissemination issues and the challenges that the government faces in moving toward increased electronic dissemination of information. One on the National Technical Information Service (NTIS)—a repository for scientific and technical information—stated that rising demand for electronic products, coupled with increasing availability of this information on the Internet, raised fundamental issues about how the information should be collected, stored, and disseminated—and specifically, about the future of NTIS itself. Specifically, we raised policy questions concerning whether a central repository was still needed and if so, how it should be structured. 
In addition, our report on the Government Printing Office—which prints and disseminates publications for all three branches of government—concluded that while electronic dissemination of government publications provided an attractive alternative to paper, a number of challenges would need to be overcome if the government were to increase electronic dissemination. These challenges included ensuring permanence, equitable access, and authenticity of documents in an electronic environment. Statistical Policy. In March 1998, in testimony on a reorganization proposal involving part of the federal statistical system, we summarized our past work in this area. We concluded that the inability of agencies to share data is one of the most significant issues facing the statistical system, and one of the major factors affecting the quality of data, the efficiency of the system, and the amount of burden placed on those who provide information to the agencies. Records Management. Last July we testified that the management of electronic records was a substantial challenge facing the government and the National Archives and Records Administration in implementing the Government Paperwork Elimination Act and in moving toward e-government. We underscored the need for strong, central leadership to overcome this challenge. Privacy. In September 2000, we reported that most Web sites we reviewed had posted privacy policies but had not consistently posted policies on pages we identified as collecting substantial amounts of personal information. We concluded that OMB’s guidance was unclear in several respects, and contained undefined language. And last April we reported on agency use of Internet “cookies” and concluded that OMB’s guidance left agencies to implement fragmented directives contained in multiple documents. Further, the guidance itself was not clear on the disclosure requirements for a certain type of cookie. Information Technology. 
In last January’s Performance and Accountability Series of reports, we identified information technology management—including improving the collection, use, and dissemination of government information; strengthening computer security; and strengthening IT management processes—as a major management challenge facing the federal government. We pointed out that the momentum generated by the government’s response to the Year 2000 change should not be lost, and that the lessons learned should be considered in addressing other pressing challenges. The report further reemphasized the need for sustained and focused central leadership, and particularly for a federal chief information officer to provide strong focus and attention to the full range of IRM and IT issues. Information Security. Since 1997, we have designated information security as a high-risk area because growing evidence indicated that controls over computerized federal operations were not effective and related risks were escalating, in part due to increasing reliance on the Internet. While many actions have been taken, current activity is not keeping pace with the growing threat. In recent testimony, we reported that our most recent analyses of audit reports published from July 2000 through September 2001 continued to show significant weaknesses at each of the 24 agencies included in our review. Consequently, critical operations, assets, and sensitive information gathered from the public and other sources continued to be vulnerable to disruption, data tampering, fraud, and inappropriate disclosure. While recognizing that the administration had taken a number of positive steps to protect critical public and private information systems, we concluded that the government still faced a challenge in ensuring that risks from cyber threats are appropriately addressed in the context of the broader array of risks to the nation’s welfare. 
Further, we recommended that the federal government’s strategy for protecting these systems define (1) specific roles and responsibilities, (2) objectives, milestones, and an action plan, and (3) performance measures. Over the years, we have made numerous recommendations to both OMB and the agencies on IRM matters. While actions have been taken to respond to our recommendations, more needs to be done. Some of the more significant recommendations involving OIRA that have not yet been implemented include the following: In 1996, in reporting on Clinger-Cohen Act implementation, we recommended that OMB identify the type and amount of skills required for OMB to execute IT portfolio analyses; determine the degree to which these needs are currently satisfied; specify the gap; and design and implement a plan to close the gap. Although OIRA officials said they are examining their staffing needs, no systematic review has been conducted to date. In the same 1996 report, we recommended that OMB evaluate information system project cost, benefit, and risk data when analyzing the results of agency IT investments. Such analyses should produce agency track records that clearly and definitively show what improvements in mission performance have been achieved for the IT dollars expended. Although OMB has provided anecdotal evidence of expected and actual mission performance improvements for some major systems projects, it is not clear that OMB has constructed or plans to construct systematic agency track records. In 1998, in a report on OIRA’s implementation of PRA, we recommended that OMB ensure that its annual performance plan and program reports to the Congress under the Government Performance and Results Act identify specific strategies, resources, and performance measures that it will use to address OIRA’s PRA responsibilities. OMB has not acted on this recommendation. 
In 2000, in a report on Internet privacy, we recommended that OMB (1) consider how best to help agencies better ensure that individuals are provided clear and adequate notice about how their personal information is treated when they visit federal Web sites, and (2) determine whether current oversight strategies are adequate. In addition, in reporting on federal agency use of Internet cookies, we recommended that OMB unify its guidance on Web site privacy policies and clarify the resulting guidance to provide comprehensive direction on the use of cookies by federal agencies on their Web sites. Although OIRA officials said that they plan to launch a privacy initiative to address these recommendations, no action has been taken to date. Current and emerging challenges—including the events of September 11 and the subsequent anthrax attacks—emphasize the importance of the integrated approach that IRM embodies and the need for a strategic plan to guide the government’s management of its increasingly valuable information resources. However, OIRA has not established an effective governmentwide strategic IRM plan to accomplish this. 
Given the magnitude of the changes that have occurred since the CIO Council plan was published in October 2000, OIRA has both an obligation and an opportunity to lead the development of a unified governmentwide plan that communicates a clear and comprehensive vision for how the government will use information resources to improve agency performance; is responsive to the current external environment, including the impact of recent terrorist attacks and other trends; recognizes the resources, including human capital, needed to achieve governmentwide IRM goals; and reflects consultation with all stakeholders—including the Office of Homeland Security, entities involved in information security and critical infrastructure protection, and the officials identified in the act—who are critical to meeting IRM challenges and the goals the administration has established in its management agenda. The shortcomings we identified in the CIO Council plan call into question the degree of management attention that OIRA has devoted thus far to producing the governmentwide plan. Without such a plan, OIRA and the agencies lack a unifying governmentwide vision for how investments in and use of information resources will facilitate the current and emerging agenda of the federal government. Further, the risk is increased that investments in IT will not be leveraged across the government; that duplicative initiatives will be undertaken; that opportunities for data sharing and public access will be missed; that privacy will be compromised; and that the security of information, information systems, and critical infrastructure will be jeopardized. Without OIRA leadership, top-level management commitment, and the application of appropriate resources to ensure the development of a comprehensive and meaningful plan, the mounting challenges that the government faces in managing information may not be met. 
While OIRA has not yet established an effective governmentwide IRM plan, it has taken action to respond to other PRA policymaking, oversight, and functional requirements. Nevertheless, OIRA faces challenges in managing critical information resources, and many of the recommendations we have made over the years have not yet been implemented. In order to address the current and emerging challenges that the government faces in managing information resources and take advantage of opportunities for improvement, we recommend that the administrator, OIRA, develop and implement a governmentwide strategic IRM plan that articulates a comprehensive federal vision and plan for all aspects of government information. In addition, recognizing the new emphasis that OMB has placed on e-government, it will be important that the administrator work in conjunction with the associate director for technology and e-government in developing this plan. In particular, the following actions should be taken:

- Consistent with the Paperwork Reduction Act, establish governmentwide goals for IRM that are linked to improvements in agency and program performance, identify strategies for achieving the goals that clearly define the roles of OIRA and agencies, and develop performance measures to assess progress in using IRM to improve agency and program performance.
- Assess the external environment and emerging future challenges and trends, including the recent terrorist attacks, and their impact on the government’s collection, use, maintenance, and dissemination of information.
- As part of an assessment of the government’s internal environment, determine the resources, including human capital, needed to meet governmentwide IRM goals. This should include an assessment of OIRA’s human capital capability, including the numbers of staff and types of skills needed, to conduct this strategic planning process and lead governmentwide implementation of the resulting plan. Based on this assessment, the administrator, OIRA, should seek to fill any gaps identified.
- Seek involvement in the planning processes from the CIO Council, the Office of Homeland Security, entities involved in information security and critical infrastructure protection, federal agencies, private-sector organizations, state and local governments, and other relevant stakeholders in meeting the government’s needs for a strong and unified information management vision.

In written comments on a draft of this report, which are reprinted in appendix III, the director, OMB, recognized that our report had significant implications for agency PRA implementation but expressed several concerns with its contents. First, he expressed concern that the report narrowly focuses on the finding that a governmentwide strategic plan must be a single document. He reiterated OMB’s position that the documents they cited during our review—the CIO Council Strategic Plan, the information collection budget, the president’s management agenda, and others—and the president’s budget for 2003, which was released after our draft report was sent for comment—in total meet the requirements for a governmentwide strategic IRM plan and provide adequate strategic direction to agencies. Second, the director expressed concern that the report does not incorporate the role of the associate director for information technology and e-government into its findings or analysis. The director stated that, in leading implementation of the e-government strategy outlined in the president’s management agenda, the associate director provides strategic direction to agencies for many of the functions in PRA, including information security, privacy, e-government, IT spending, enterprise architecture, and capital planning, and leads the work of OIRA and other OMB offices to improve agency performance on these issues. 
Lastly, the director stated that the report does not analyze the impact of OMB’s policies and practices—established in response to the requirements of PRA and other IRM statutes—on agency performance. He further stated that such an analysis would demonstrate that the president’s e-government initiative and other actions are highly effective in carrying out the purposes of PRA. We disagree with the director’s statement that our report narrowly focuses on the requirement for a strategic plan to be a single document. We performed a rigorous analysis of the documents cited by OMB during our review and compared their contents against the requirements of the PRA. Our primary finding was that these documents do not, separately or collectively, meet the requirements for a governmentwide plan. As discussed in our report, we acknowledge the strategic elements of the CIO Council plan and the president’s management agenda but found that these documents do not comprehensively cover IRM issues and are missing other key elements of a strategic IRM plan. The remaining documents cited by OMB are not strategic or focused on the future, and do not provide goals, strategies, and performance measures. Further, we think there is value in crafting a single plan—not only because it is required by PRA but also because it would provide a vehicle for clearly communicating an integrated strategic IRM vision to agencies, the Congress, and the public. However, contrary to what OMB’s letter implies, we do not believe that OMB must necessarily produce an entirely new document to accomplish this. OMB has options for building on past efforts—including the CIO Council strategic plan, the president’s management agenda, and the president’s budget for 2003—to develop a plan that contains a comprehensive strategic statement of goals and resources. 
Regarding the budget for 2003—released after our draft report was sent for comment—this document identifies e-government and IT management reform as administration priorities. Specifically, it contains (1) a description of IT management issues, including duplicative IT investments and the failure of IT investments to significantly improve agency performance; (2) additional information on the administration’s e-government goals and strategies and high-level descriptions of specific e-government initiatives; (3) descriptions of agency progress in developing capital planning and investment control processes, enterprise architectures, and business cases for IT projects, and in implementing e-government; and (4) process improvement milestones for calendar year 2002. The budget also contains a scorecard used to grade agency progress in the five governmentwide initiatives—including e-government—described in the president’s management agenda. In addition, for major IT investments, the budget identifies total investments for 2001 through 2003, links each investment to the agency’s strategic goals, and provides performance goals and measures for these projects. The budget also contains a discussion on strengthening federal statistics and identifies four programs supported by the budget that are intended to address shortcomings in the statistical infrastructure. Our preliminary analysis indicates that this budget contains many of the elements required in a strategic plan that were not present in previous documents cited by OMB and, when viewed in conjunction with the president’s management agenda, represents credible progress toward developing a governmentwide plan. Specifically, it includes a discussion—within the context of e-government—of how the government will use information resources to improve agency performance, and identifies goals and strategies. 
It also discusses other required elements, including (1) enhancing public access to and dissemination of information and (2) meeting the IT needs of the government, and cites the need to reduce reporting burden on businesses and share data among federal agencies. Further, it provides the status of agency-by-agency progress in establishing IT management processes and implementing e-government, and the scorecard provides a means of measuring agency progress. The discussion also links improving information sharing among levels of government to providing for homeland security. However, some of the areas that the budget does not appear to address include (1) the role of OIRA and the CIO Council in implementing the government’s strategies, (2) an assessment of the long-term resources (beyond fiscal year 2003)—including human capital—needed to meet the goals, and (3) how key stakeholders were involved in developing these plans. Nevertheless, based on a preliminary review of this document, it appears to address, in part, the recommendations in this report. We intend to follow up on this and other documents that OMB has indicated are forthcoming to determine the extent to which our recommendations are fully addressed. We acknowledge the role that OMB has given to the associate director to provide strategic direction to agencies, and we support additional efforts to focus attention on IRM matters, especially given the magnitude of the government’s challenges. However, we believe that a governmentwide strategic IRM plan is nonetheless needed to communicate an integrated IRM vision to the Congress and other key stakeholders, as well as federal agencies. As a result, we have modified our recommendations to recognize the importance of the administrator’s working in conjunction with the associate director to articulate a comprehensive IRM vision and develop a governmentwide plan that meets PRA requirements. 
Finally, we acknowledge that we did not assess the impact of OIRA’s policymaking and oversight efforts—performed in response to the requirements of the PRA and other IRM legislation—on agency performance. However, our past work, referenced in this report, provides ample evidence of agency performance problems in such areas as IT management, security, privacy, and data sharing and confirms that OMB faces significant and continuing challenges in these areas. Further, as discussed in our report, our past work led to our identifying information security as a governmentwide high-risk area and IT management as a major management challenge. In fact, OMB identifies some of these same performance problems in its budget for 2003 and in its related assessments of agency progress in expanding e-government. In addition, we note that the president’s e-government initiative is clearly in its early stages; any efforts to evaluate its impact on agency performance at this time would be premature. The deputy administrator, OIRA, and other officials also separately provided oral technical comments, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will provide copies to the ranking minority member, Senate Committee on Governmental Affairs; the chairman and ranking minority member, House Committee on Government Reform; and the director, Office of Management and Budget. Copies will also be available on our Web site at www.gao.gov. If you have any questions, please contact me at (202) 512-6240 or Patricia D. Fletcher, assistant director, at (202) 512-4071. We can also be reached by e-mail at koontzl@gao.gov and fletcherp@gao.gov, respectively. Key contributors to this report were Michael P. Fruitman, Ona M. Noble, Robert P. Parker, Colleen M. Phillips, and David F. Plocher. 
To evaluate the adequacy of OIRA’s strategic planning efforts, we performed a content analysis of the Federal Chief Information Officers (CIO) Council Strategic Plan for fiscal years 2001–2002—which OIRA officials identified as the governmentwide IRM plan—and compared it with specific PRA requirements (§ 3505(a)). We also interviewed OIRA and CIO Council officials to obtain information on the plan’s preparation. We reviewed our prior reports for information on evaluations and recommendations made for previous OIRA governmentwide strategic IRM plans. Further, to understand the challenges the government faces in managing information in today’s environment, we reviewed our more recent reports on terrorism, bioterrorism, and homeland security issues. In addition, we reviewed The President’s Management Agenda for Fiscal Year 2002. We also reviewed additional documents that, according to OIRA, also comprise the governmentwide IRM plan. These included the 1997-2007 Strategic Plan of the National Archives and Records Administration, OMB’s Information Collection Budget, the exhibit 53s and table 22-1 in the president’s budget for fiscal year 2002, and OMB’s Statistical Programs of the United States Government. We also reviewed OMB memoranda to agencies entitled Procedures and Guidance on Implementing the Government Paperwork Elimination Act (April 25, 2000), Guidance for Preparing and Submitting Security Plans of Action and Milestones (October 17, 2001), and Implementation of the President’s Management Agenda and Presentation of the Fiscal Year 2003 Budget Request (October 30, 2001). Finally, we reviewed the president’s budget for fiscal year 2003 after it was released on February 4, 2002. To determine OIRA actions to respond to specific IRM functional requirements, we reviewed OMB circulars, bulletins, memoranda, and other documents. In addition, we interviewed OIRA officials responsible for each of the functional areas. 
We reviewed our prior work on this subject and assessed OIRA’s status regarding outstanding recommendations. We focused primarily on actions taken by OIRA since 1995, the date of the most recent PRA amendments. However, we did not assess the adequacy of OIRA’s actions to respond to these requirements.

Section 3504(b): General IRM Policy

OIRA requirement: Develop and oversee the implementation of uniform information resources management policies, principles, standards, and guidelines.
Actions taken: OMB revised its IRM policy guidance, Circular No. A-130, to reflect the 1995 act, the Clinger-Cohen Act of 1996, and other matters. Circular A-130 complements 5 CFR 1320, “Controlling Paperwork Burden on the Public.” OIRA’s general approach to oversight is to leverage its existing statutory processes, including the budget, regulatory review, information collection review, legislative review, Privacy Act systems of records review, and periodic reports from the agencies.

OIRA requirement: Foster greater sharing, dissemination, and access to public information, including through the use of the Government Information Locator Service (GILS), and the development and utilization of common standards for information collection, storage, processing, and communications, including standards for security and interconnectivity.
Actions taken: OIRA officials acknowledged that GILS is still a requirement; however, they stated that increased use of the Internet, coupled with the development of more powerful search engines, has lessened the importance of this approach to locating government information. They highlighted the establishment of FirstGov.gov—a federal government portal that provides a single point of access to all federal government information posted on the World Wide Web—as a major accomplishment in this area. In addition, OIRA has worked with the CIO Council to establish Access America portals in the areas of health, trade, students, and seniors. 
OIRA does not set technical standards; OMB works with NIST and consults with the CIO Council to define policy standards for operational matters.

OIRA requirement: Initiate and review proposals for changes in legislation, regulations, and agency procedures to improve information resources management practices.
Actions taken: OIRA officials say they do not initiate legislative proposals, but review them via consultation with the CIO Council, individual agencies, and OMB’s Legislative Reference Division. Altogether, OIRA receives about 5 or 6 proposals each day. OIRA does not have a systematic process for initiating or reviewing agency procedures to improve IRM.

OIRA requirement: Oversee the development and implementation of best practices in IRM, including training.
Actions taken: OIRA officials stated that they encourage agencies to follow best practices—relying on the CIO Council’s leadership and influence. NIST disseminates security best practices.

OIRA requirement: Oversee agency integration of program management functions with IRM functions.
Actions taken: OIRA officials stressed that agencies are responsible for overseeing their own management functions through the agency’s CIO.

Section 3504(c): Collection and Control of Paperwork

OIRA requirement: Review and approve proposed agency collections of information.
Actions taken: OIRA operates the paperwork clearance process established under the Paperwork Reduction Act of 1980. OIRA has draft guidance for agency compliance with the PRA’s paperwork clearance requirements (preliminary January 1997 draft, revised August 1999). In fiscal year 2001, OIRA reviewed 1,521 proposed agency collections, approved 1,411, and disapproved 5; the remainder were withdrawn or returned to the agency.

OIRA requirement: Coordinate the review of information collection concerning procurement and acquisition with the Office of Federal Procurement Policy (OFPP).
Actions taken: According to OIRA, the desk officers responsible for information collection review routinely coordinate collections concerning procurement and acquisition with OFPP, but such coordination is not documented. 
OIRA requirement: Minimize information collection burden and maximize the practical utility of and public benefit from information collected.
Actions taken: According to OIRA, the information collection review process is used to minimize information collection burden and maximize practical utility and public benefit.

OIRA requirement: Establish and oversee standards and guidelines for estimated paperwork burden.
Actions taken: OIRA published standards for estimating paperwork burden in 1999 and oversees implementation through the paperwork clearance process.

Section 3504(d): Information Dissemination

OIRA requirement: Develop and oversee the implementation of policies, principles, standards, and guidelines to apply to agency dissemination, regardless of form or format, and promote public access to information.
Actions taken: In 1995 OMB issued guidance (M-95-22, 9/29/95) on implementing the information dissemination provisions of PRA. This guidance was incorporated into its February 1996 revisions to A-130. According to OIRA officials, OMB has been in consultation with stakeholders and other interested parties to discuss the current information policies of A-130 and to discern if they continue to address the needs of agencies and stakeholders in using government information. OIRA officials also said that oversight of this policy is accomplished through the information collection process, conversations with agency CIOs, review of agency Web sites, and discussions with agency personnel.

Section 3504(e): Statistical Policy and Coordination

OIRA requirements: Appoint a chief statistician to coordinate the activities of the federal statistical system. Establish an interagency council on statistical policy to advise and assist OIRA in carrying out these functions. Prepare an annual report on statistical program funding. Coordinate the federal statistical system to ensure its efficiency and effectiveness, along with the integrity, objectivity, impartiality, utility, and confidentiality of information collected for statistical purposes. 
Actions taken: OIRA coordinates the federal statistical system through several processes. These include the budget formulation and information collection review processes; the development and implementation of long-range plans; the issuance and revision of statistical policy standards and orders; consultation with the Interagency Council on Statistical Policy; and the activities and recommendations of interagency committees such as the Federal Committee on Statistical Methodology, the Interagency Committee for the American Community Survey, the Interagency Forum on Aging-Related Statistics, the Interagency Forum on Child and Family Statistics, and the Task Force on One-Stop Shopping for Federal Statistics. In 1997 OMB issued an order on confidentiality covering information collection by statistical agencies. The chief statistician stated that OIRA has not formally evaluated the impact of this order. However, she stated that it has been very useful to some of the statistical agencies, particularly in clarifying that confidential statistical data are not to be used for administrative or regulatory purposes.

OIRA requirement: Ensure that agency budget proposals are consistent with systemwide priorities.
Actions taken: The Statistical Policy Branch coordinates the budget requests of key multiagency programs to ensure consistency with systemwide priorities. In addition, the budgets of all principal statistical agencies are reviewed by OMB’s Resource Management Organizations and the Statistical Policy Branch. According to the chief statistician, the statistical program budgets of other agencies, which account for about 60 percent of the approximately $4 billion of annual federal spending on statistics, are not covered by this review, primarily because of inadequate detail in budget materials.

OIRA requirement: Develop and oversee the implementation of governmentwide policies, principles, standards, and guidelines for collection methods, data classifications, dissemination, timely release, and needs for administration of federal programs. 
Actions taken: Statistical Policy Branch staff participate directly in the review of proposed information collection requests by federal agencies. According to the chief statistician, this participation provides the staff with oversight of the questionnaires and statistical methodologies used to collect information, as well as the use of these collections for federal program needs. OIRA has also expanded or updated classification standards for industries (1997, 2001), occupations (1998), metropolitan and micropolitan areas (2000), and race and ethnicity (1997), and is developing a new product classification system. An OMB policy directive, last updated in 1985, specifies the process for the timely release of principal economic indicators and requires agencies to conduct periodic evaluations of the quality of those indicators. According to the chief statistician, OIRA does not conduct a formal review of these evaluations, relying on agencies to use them to improve the timeliness and quality of their statistical programs, but does use them in the information collection request and budget formulation processes.

OIRA requirement: Evaluate statistical program performance and agency compliance with governmentwide policies, principles, standards, and guidelines.

OIRA requirement: Promote sharing of information collected for statistical purposes consistent with privacy rights and confidentiality pledges.
Actions taken: The Statistical Efficiency Act of 1999 was proposed, and subsequent president’s budgets have continued to urge enactment of this legislation, which would permit data sharing solely for statistical purposes for a specified group of statistical agencies. To promote data sharing consistent with privacy rights and confidentiality pledges, OMB in 1997 issued a confidentiality order for information collected by statistical agencies. 
OIRA officials have not formally evaluated the impact of this order, but have noted that some statistical agencies have found it very useful, particularly in clarifying that statistical data collected under a confidentiality pledge are not to be used for nonstatistical purposes, such as administrative or regulatory purposes. According to the chief statistician, OIRA has, on occasion, used the provisions of 44 U.S.C. 3509 to designate a single agency to collect and share data needed by multiple agencies (consistent with privacy rights and confidentiality pledges), thereby reducing respondent burden.

OIRA requirement: Coordinate the participation of the United States in international statistical activities.
Actions taken: The Statistical Policy Branch serves as the focal point for coordinating U.S. participation in international statistical activities. OIRA coordinates agency participation in statistical activities with the United Nations Statistical Division, the Organization for Economic Cooperation and Development, and the Statistical Office of the European Communities. The chief statistician represents the United States at meetings of the United Nations Statistical Commission. The chief statistician stated that through this participation, she ensures that U.S. interests are taken into account in these policy-setting forums, where programs for international statistical work are developed and adopted. She noted that in preparation for these meetings, agency views are sought on the agenda items by contacting the member agencies of the ICSP. She also stated that, working through the Council, OMB ensures that the appropriate technical experts represent the United States in various subject matter meetings and in international standards development work.

OIRA requirement: Promote opportunities for training in statistical policy functions.
Actions taken: According to the chief statistician, the Statistical Policy Branch encourages agencies to send staff to OIRA to be trained. 
For each of the past 6 years, agency staff have worked at OIRA, participating in such activities as the preparation of the annual report on statistical programs and the review of information collection requests.

Section 3504(f): Records Management

OIRA requirement: Provide advice and assistance to the Archivist of the United States and the Administrator of General Services to promote coordination of records management requirements with IRM policies, principles, standards, and guidelines. Review agency compliance with records management legal and regulatory requirements.
Actions taken: OMB officials stated that OIRA relies heavily on NARA to take leadership for records management policy. OIRA officials stated that they and OMB budget examiners work closely with both NARA and GSA. They have provided advice countless times, but these interactions are informal and therefore undocumented.

OIRA requirement: Oversee the application of records management policies, principles, standards, and guidelines, including the requirements for archiving information maintained in electronic format, ensuring that programs adequately document agency activities and incorporate records management functions into the design, development, and implementation of information systems.
Actions taken: OIRA officials stated that they oversee agency application of records management policies through the information collection budget and review processes. According to OMB officials, an e-government initiative on e-records management will provide a framework for this.

Section 3504(g): Privacy and Security

OIRA requirement: Develop and oversee the implementation of policies, principles, standards, and guidelines on privacy, confidentiality, security, and disclosure and sharing of information.
Actions taken: OMB Circular A-130 provides implementing guidance to agencies on security and privacy. In addition, it contains specific guidance on federal agency responsibilities for maintaining records about individuals (app. I) and on security of federal automated information resources (app. III). Further, OIRA has issued several memoranda addressing such issues as interagency data sharing, Internet privacy issues, and the need to incorporate security and privacy in information systems design and investment.

OIRA requirement: Oversee and coordinate compliance with the Freedom of Information Act, the Privacy Act, the Computer Security Act of 1987, and related information management laws.
Actions taken: According to OIRA, it oversees and coordinates compliance with the Computer Security Act through the provisions of the Government Information Security Reform Act that require agencies to engage in systematic self-reporting on their computer security programs. OIRA oversees the Privacy Act through its reporting requirements and review of agency notices for new or modified Privacy Act systems of records. Freedom of Information Act oversight is given to the Department of Justice, although OMB provides guidance on fees. OIRA also receives and reviews all agency inspector general reports and annual reports, monitors GSA’s incident report tracking system, and reviews the integration of IT security in the budget process and the capital planning and investment control process.

OIRA requirement: Require agencies to identify and afford security protections commensurate with the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to or modification of information.
Actions taken: A-130 requires a risk-based approach to information security and stipulates that new or continued funding for IT systems is contingent on meeting security criteria. OIRA officials again emphasized that it is the individual agency’s responsibility to provide appropriate risk-based security protections.

Section 3504(h): Federal Information Technology

OIRA requirement: In consultation with the Director of NIST and the Administrator of General Services, develop and oversee the implementation of policies, principles, standards, and guidelines for information technology functions and system standards.
Actions taken: According to OIRA officials, OIRA staff routinely consult with NIST and the General Services Administration in developing policy and guidance.

OIRA requirement: Monitor the effectiveness of, and compliance with, directives issued under the Clinger-Cohen Act and relative to the IT fund.
Actions taken: OIRA holds annual capital planning and investment control meetings with individual agencies to judge the well-being of IT portfolios. OIRA officials stated that they maintain a database to track agency portfolios over time, but consider this information to be “pre-decisional”; it was thus not made available to us. However, additional detail on agency IT portfolios was provided in the 2003 budget.

OIRA requirement: Coordinate the development and review of IRM policy associated with procurement and acquisition with the Office of Federal Procurement Policy.
Actions taken: OIRA officials collaborate with the Office of Federal Procurement Policy on issues related to IT procurement and acquisition.

OIRA requirement: Ensure (1) agency integration of IRM plans, program plans, and budgets for acquisition and use of IT and (2) the efficiency and effectiveness of interagency IT initiatives.
Actions taken: OIRA officials use the budget and capital planning processes, in addition to the guidance in A-130, to ensure agency integration of IRM plans and budgets. OIRA works closely with the CIO Council to ensure the efficiency and effectiveness of interagency IT initiatives.

OIRA requirement: Promote the use of IT to improve the productivity, efficiency, and effectiveness of federal programs.
Actions taken: OIRA promotes the use of information technology by participating in interagency meetings, through the information collection review process, and through desk officer liaison activities with agencies. 
According to OIRA officials, OIRA uses requirements for capital planning and investment control processes, enterprise architectures, and business cases during the budget process to improve how agencies plan, acquire, and manage IT.

Related GAO Products

Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection (GAO-02-235T, November 15, 2001)
Computer Security: Improvements Needed to Reduce Risk to Critical Federal Operations and Assets (GAO-02-231T, November 9, 2001)
Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs (GAO-02-160T, November 7, 2001)
Electronic Government: Better Information Needed on Agencies’ Implementation of the Government Paperwork Elimination Act (GAO-01-1100, September 28, 2001)
Homeland Security: A Framework for Addressing the Nation’s Efforts (GAO-01-1158T, September 21, 2001)
Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, September 20, 2001)
Electronic Government: Challenges Must Be Addressed With Effective Leadership and Management (GAO-01-959T, July 11, 2001)
Information Management: Dissemination of Technical Reports (GAO-01-490, May 18, 2001)
Internet Privacy: Implementation of Federal Guidance for Agency Use of “Cookies” (GAO-01-424, April 27, 2001)
Paperwork Reduction Act: Burden Estimates Continue to Increase (GAO-01-648T, April 24, 2001)
Record Linkage and Privacy: Issues in Creating New Federal Research and Statistical Information (GAO-01-126SP, April 2001)
Information Management: Electronic Dissemination of Government Publications (GAO-01-428, March 30, 2001)
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, March 21, 2001)
Information Management: Progress in Implementing the 1996 Electronic Freedom of Information Act Amendments (GAO-01-378, March 16, 2001)
High-Risk Series: An Update (GAO-01-263, January 2001)
Major Management Challenges and Program Risks: A Governmentwide Perspective (GAO-01-241, January 2001)
Determining Performance and Accountability Challenges and High Risks (GAO-01-159SP, November 2000)
Electronic Government: Opportunities and Challenges Facing the FirstGov Web Gateway (GAO-01-87T, October 2, 2000)
Federal Chief Information Officer: Leadership Needed to Confront Serious Challenges and Emerging Issues (GAO/T-AIMD-00-316, September 12, 2000)
Year 2000 Computing Challenge: Lessons Learned Can Be Applied to Other Management Challenges (GAO/AIMD-00-290, September 12, 2000)
Internet Privacy: Agencies’ Efforts to Implement OMB’s Privacy Policy (GAO/GGD-00-191, September 5, 2000)
Congressional Oversight: Challenges for the 21st Century (GAO/T-OCG-00-11, July 20, 2000)
Revisions to OMB’s Circular A-130 (GAO/AIMD-00-183R, May 23, 2000)
Paperwork Reduction Act: Burden Increases at IRS and Other Agencies (GAO/T-GGD-00-114, April 12, 2000)
Office of Management and Budget: Future Challenges to Management (GAO/T-GGD/AIMD-00-141, April 7, 2000)
Managing in the New Millennium: Shaping a More Efficient and Effective Government for the 21st Century (GAO/T-OCG-00-9, March 29, 2000)
Year 2000 Computing Challenge: Federal Business Continuity and Contingency Plans and Day One Strategies (GAO/T-AIMD-00-40, October 29, 1999)
Managing for Results: Answers to Hearing Questions on Quality Management (GAO/GGD-99-181R, September 10, 1999)
National Archives: Preserving Electronic Records in an Era of Rapidly Changing Technology (GAO/GGD-99-94, July 19, 1999)
Paperwork Reduction Act: Burden Increases and Unauthorized Information Collections (GAO/T-GGD-99-78, April 15, 1999)
Government Management: Observations on OMB’s Management Leadership Efforts (GAO/T-GGD/AIMD-99-65, February 4, 1999)
Information Security: Serious Weaknesses Place Critical Federal Operations and Assets at Risk (GAO/AIMD-98-92, September 23, 1998)
Regulatory Management: Implementation of Selected OMB Responsibilities Under the Paperwork Reduction Act (GAO/GGD-98-120, July 9, 1998)
Government Management: Observations on OMB’s Management Leadership Efforts (GAO/T-GGD/AIMD-98-148, May 12, 1998)
Statistical Agencies: Proposed Consolidation and Data Sharing Legislation (GAO/T-GGD-98-91, March 26, 1998)
Managing for Results: Observations on Agencies’ Strategic Plans (GAO/T-GGD-98-66, February 12, 1998)
Managing for Results: Agencies’ Annual Performance Plans Can Help Address Strategic Planning Challenges (GAO/GGD-98-44, January 30, 1998)
Managing for Results: Observations on OMB’s September 1997 Strategic Plan (GAO/T-AIMD/GGD-98-10, October 6, 1997)
Agencies’ Strategic Plans Under GPRA: Key Questions to Facilitate Congressional Review (GAO/GGD-10.1.16, May 1997)
Statistical Agencies: Consolidation and Quality Issues (GAO/T-GGD-97-78, April 9, 1997)
Managing for Results: Enhancing the Usefulness of GPRA Consultations Between the Executive Branch and Congress (GAO/T-GGD-97-56, March 10, 1997)
Information Technology Investment: Agencies Can Improve Performance, Reduce Costs, and Minimize Risks (GAO/AIMD-96-64, September 30, 1996)
Information Management Reform: Effective Implementation Is Essential for Improving Federal Performance (GAO/T-AIMD-96-132, July 17, 1996)
Statistical Agencies: Statutory Requirements Affecting Government Policies and Programs (GAO/GGD-96-106, July 17, 1996)
Federal Statistics: Principal Statistical Agencies’ Missions and Funding (GAO/GGD-96-107, July 1, 1996)
Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996)
Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994)
Congress passed the Paperwork Reduction Act (PRA) to establish a single, overarching policy framework for the management of government information resources. The act established information resources management (IRM) as an approach governing the collection, dissemination, security, privacy, and management of information. The act also created the Office of Information and Regulatory Affairs (OIRA) to provide leadership, policy direction, and oversight of governmentwide IRM. It further required OIRA to develop and maintain a governmentwide strategic IRM plan and charged that office with responsibilities for general IRM policy and information technology. Although OIRA designated the Chief Information Officers Council's strategic plan for fiscal years 2001-2002 as the governmentwide strategic IRM plan required by the PRA, this plan does not constitute an effective and comprehensive strategic vision. OIRA has issued policy and implementing guidance, conducted oversight activities, and taken various steps in each of the functional areas. GAO found that the documents cited by OMB during its review did not, separately or collectively, meet the requirements for a governmentwide strategic IRM plan established by PRA.
Customs began ACE in 1994, and its early estimate of the cost and time to develop the system was $150 million over 10 years. At this time, Customs also decided to first develop a prototype of ACE, referred to as NCAP (National Customs Automation Program prototype), and then to complete the system. In May 1997, we testified that Customs’ original schedule for completing the prototype was January 1997, and that Customs did not have a schedule for completing ACE. At that time, Customs agreed to develop a comprehensive project plan for ACE. In November 1997, Customs estimated that the system would cost $1.05 billion to develop, operate, and maintain throughout its life cycle. Customs plans to develop and deploy the system in 21 increments from 1998 through 2005, the first four of which would constitute NCAP. Currently, Customs is well over 2 years behind its original NCAP schedule. Because Customs experienced problems in developing NCAP software in-house, the first NCAP release was not deployed until May 1998—16 months late. In view of the problems it experienced with the first release, Customs contracted out for the second NCAP release and deployed this release in October 1998—21 months later than originally planned. Customs’ most recent dates for deploying the final two NCAP releases (0.3 and 0.4) are March 1999 and September 1999, which are 26 and 32 months later than the original deployment estimates, respectively. According to Customs, these dates will slip further because of funding delays. Additionally, Customs officials told us that a new ACE life cycle cost estimate is being developed, but that it was not ready to be shared with us. At the time of our review, Customs’ $1.05 billion estimate developed in 1997 was the official ACE life cycle cost estimate. However, a January 1998 ACE business plan specifies a $1.48 billion life cycle cost estimate. 
Customs is not building ACE within the context of an enterprise systems architecture, or “blueprint” of its agencywide future systems environment. Such an architecture is a fundamental component of any rational and logical strategic plan for modernizing an organization’s systems environment. As such, the Clinger-Cohen Act requires agency chief information officers (CIO) to develop, maintain, and implement an information technology (IT) architecture. Also, the Office of Management and Budget (OMB) issued guidance in 1996 that requires agency IT investments to be architecturally compliant. These requirements are consistent with, and in fact based on, IT management practices of leading private and public sector organizations. Simply stated, an enterprise systems architecture specifies the system (e.g., software, hardware, communications, security, and data) characteristics that the organization’s target systems environment is to possess. Its purpose is to define, through careful analysis of the organization’s strategic business needs and operations, a future systems configuration that not only supports the strategic business vision and concept of operations but also defines the optimal set of technical standards that should be met to produce homogeneous systems that can interoperate effectively and be maintained efficiently. Our work has shown that in the absence of an enterprise systems architecture, incompatible systems are produced that require additional time and resources to interconnect and to maintain and that suboptimize the organization’s ability to perform its mission. We first reported on Customs’ need for a systems architecture in May 1996 and testified on this subject in May 1997. In response, Customs developed and published an architecture in July and August 1997. We reviewed this architecture and reported in May 1998 that it was not effective because it was neither complete nor enforced. For example, the architecture did not (1) fully describe Customs’ business functions and their relationships, (2) define the information needs and flows among these functions, or (3) establish the technical standards, products, and services that would be characteristic of its target systems environment on the basis of these business specifications. Accordingly, we recommended that Customs complete its enterprise information systems architecture and establish compliance with the architecture as a requirement of Customs’ information technology investment management process. In response, Customs agreed to develop a complete architecture and establish a process to ensure compliance. Customs is in the process of developing the architecture and reports that it will be completed in May 1999. Also, in January 1999, Customs reported that it changed its internal procedures to provide for effective enforcement of its architecture, once it is completed. Until the architecture is completed and enforced, Customs risks spending millions of dollars to develop, acquire, and maintain information systems, including ACE, that do not effectively and efficiently support the agency’s mission needs. Effective IT investment management is predicated on answering one basic question: Is the organization doing the “right thing” by investing specified time and resources in a given project or system? The Clinger-Cohen Act and OMB guidance together provide an effective IT investment management framework for answering this question. Among other things, they set requirements for (1) identifying and analyzing alternative system solutions, (2) developing reliable estimates of the alternatives’ respective costs and benefits and investing in the most cost-beneficial alternative, and (3) to the maximum extent practical, structuring major projects into a series of increments to ensure that each increment constitutes a wise investment. Customs did not satisfy any of these requirements for ACE.
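The second requirement (estimating each alternative's costs and benefits and investing in the most cost-beneficial one) can be illustrated with a minimal sketch. The alternatives and dollar figures below are hypothetical, not Customs data; the point is only the shape of the comparison the Clinger-Cohen framework calls for.

```python
# Illustrative only: alternative names and $ millions figures are hypothetical.

def net_benefit(costs: float, benefits: float) -> float:
    """Life cycle benefits minus life cycle costs."""
    return benefits - costs

alternatives = {
    "build in-house": {"costs": 1050.0, "benefits": 1300.0},
    "contract out": {"costs": 900.0, "benefits": 1250.0},
    "reuse shared system (e.g., ITDS)": {"costs": 600.0, "benefits": 1100.0},
}

# Select the most cost-beneficial alternative before committing funds.
best = max(alternatives, key=lambda a: net_benefit(**alternatives[a]))
for name, a in alternatives.items():
    print(f"{name}: net benefit ${net_benefit(**a):.0f}M")
print("most cost-beneficial:", best)
```

In practice each estimate would itself need to be reliable (built with a cost model and historical data), which is the subject of the findings that follow.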
First, Customs did not identify and evaluate a full range of alternatives to its defined ACE solution before commencing development activities. For example, Customs did not consider how ACE would relate to another Treasury-proposed system for processing import trade data, known as the International Trade Data System (ITDS), including considering the extent to which ITDS should be used to satisfy needed import processing functionality. Initiated in 1995 as a project to develop a coordinated, governmentwide system for the collection, use, and dissemination of trade data, the ITDS project is headed by the Treasury Deputy Assistant Secretary for Regulatory, Tariff and Trade Enforcement. The system is expected to reduce the burden that federal agencies place on organizations by eliminating the need to respond to duplicative data requests. Treasury intends for the system to serve as the single point for collecting, editing, and validating trade data as well as collecting and accounting for trade revenue. At the time of our review of ACE, these functions were also planned for ACE. Similarly, Customs did not evaluate different ACE architectural designs, such as the use of a mainframe-based versus client/server-based hardware architecture. Also, Customs did not evaluate alternative development approaches, such as acquisition versus in-house development. In short, Customs committed to and began building ACE without knowing whether it had chosen the most cost-effective alternative and approach. Second, Customs did not develop a reliable life cycle cost estimate for the approach it selected. The Software Engineering Institute (SEI) has developed a method for project managers to use to determine the reliability of project cost estimates. Using SEI’s method, we found that Customs’ $1.05 billion ACE life cycle cost estimate was not reliable, and that it did not provide a sound basis for Customs’ decision to invest in ACE.
For example, in developing the cost estimate, Customs (1) did not use a cost model, (2) did not account for changes in its approach to building different ACE increments, (3) did not account for changes to ACE software and hardware architecture, and (4) did not have historical project cost data against which to compare its ACE estimate. Moreover, the $1.05 billion cost estimate used to economically justify ACE omitted relevant costs. For instance, the costs of technology refreshment and system requirements definition were not included (see table 1). Exacerbating this problem, Customs represented its ACE cost estimate as a precise point estimate rather than explicitly disclosing to investment decisionmakers in Treasury, OMB, and Congress the estimate’s inherent uncertainty. Customs’ projections of ACE benefits were also unreliable because they were either overstated or unsupported. For example, the analysis includes $203.5 million in savings attributable to 10 years of avoided maintenance and support costs on the Automated Commercial System (ACS)—the system ACE is to replace. However, Customs would not have avoided maintenance and support costs for 10 years. At the time of Customs’ analysis, it planned to run both systems in parallel for 4 years, and thus planned to spend about $53 million on ACS maintenance and support during this period. As another example, $650 million in savings was not supported by verifiable data or analysis, and $644 million was based on assumptions that were analytically sensitive to slight changes, making this $644 million a “best case” scenario. Third, Customs is not making its investment decisions incrementally as required by the Clinger-Cohen Act and OMB. Although Customs has decided to implement ACE as a series of 21 increments, it is not justifying investing in each increment on the basis of defined costs and benefits and a positive return on investment for each increment.
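The overstatement in the avoided-maintenance savings cited above can be checked with simple arithmetic, using the figures in this testimony and assuming, as Customs' own plan did, that the roughly $53 million covers ACS maintenance and support for the full 4-year parallel-operation period.

```python
# Figures cited in the testimony, in $ millions.
claimed_acs_savings = 203.5   # 10 years of "avoided" ACS maintenance/support
parallel_period_cost = 53.0   # planned ACS spending during the 4 parallel years

# Savings actually supportable once parallel-operation spending is netted out.
supportable_savings = claimed_acs_savings - parallel_period_cost
print(f"supportable savings: ${supportable_savings:.1f}M")  # prints: supportable savings: $150.5M
```

That is, roughly a quarter of the claimed ACS savings disappears once the planned parallel-operation spending is accounted for.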
Further, once it has deployed an increment at a pilot site for evaluation, it is not validating the benefits that the increment actually provides, and it is not accounting for costs on each increment so that it can demonstrate that a positive return on investment was actually achieved. Instead, Customs estimated the costs and benefits for the entire system (all 21 increments) and used this as the economic justification for ACE. Mr. Chairman, our work has shown that such estimates of many system increments to be delivered over many years are impossible to make accurately because later increments are not well understood or defined. Also, these estimates are subject to change in light of experiences on nearer term increments and changing business needs. By using an inaccurate, aggregated estimate that is not refined as increments are developed, Customs is committing enormous resources with no assurance that it will achieve a reasonable return on its investment. This “grand design” approach to managing large system modernization projects has repeatedly proven to be ineffective across the federal government, resulting in huge sums invested in systems that do not provide expected benefits. Failure of the grand design approach was a major impetus for the IT management reforms contained in the Clinger-Cohen Act. Software process maturity is one important and recognized measure of determining whether an organization is managing a system or project the “right way,” and thus whether or not the system will be completed on time and within budget and will deliver promised capabilities. The Clinger-Cohen Act requires agencies to implement effective IT management processes, such as processes for managing software development and acquisition. SEI has developed criteria for determining an organization’s software development and acquisition effectiveness or maturity. Customs lacks the capability to effectively develop or acquire ACE software.
Using SEI criteria for process maturity at the “repeatable” level, which is the second level on SEI’s five-level scale and means that an organization has the software development/acquisition rigor and discipline to repeat project successes, we evaluated ACE software processes. In February 1999, we reported that the software development processes that Customs was employing on NCAP 0.1, the first release of ACE, were not effective. For example, we reported that Customs lacked effective software configuration management, which is important for establishing and maintaining the integrity of the software products during development. Also, we reported that Customs lacked a software quality assurance program, which greatly increased the risk of ACE software not meeting process and product standards. Further, we reported that Customs lacked a software process improvement program to effectively address these and other software process weaknesses. Our findings concerning ACE software development maturity are summarized in table 2. As discussed in our brief history of ACE, after Customs developed NCAP 0.1 in-house, it decided to contract out for the development of NCAP 0.2, thus changing its role on ACE from being a software developer to being a software acquirer. According to SEI, the capabilities needed to effectively acquire software are different than the capabilities needed to effectively develop software. Regardless, we reported later in February 1999 that the software acquisition processes that Customs was employing on NCAP 0.2 were not effective. For example, Customs did not have an effective software acquisition planning process and, as such, could not effectively establish reasonable plans for performing software engineering and for managing the software project. Also, Customs did not have an effective evaluation process, meaning that it lacked the capability for ensuring that contractor-developed software satisfied defined requirements. 
Our findings concerning ACE software acquisition maturity are summarized in table 3. To address ACE management weaknesses, we recommended that Customs analyze alternative approaches to satisfying its import automation needs, including addressing the ITDS/ACE relationship; invest in its defined ACE solution incrementally, meaning for each system increment (1) rigorously estimate and analyze costs and benefits, (2) require a favorable return-on-investment and compliance with Customs’ enterprise systems architecture, and (3) validate actual costs and benefits once an increment is piloted, compare actuals to estimates, use the results in deciding on future increments, and report the results to congressional authorizers and appropriators; establish an effective software process improvement program and correct the software process weaknesses in our report, thereby bringing ACE software process maturity to at least SEI level 2; and require at least SEI level 2 processes of all ACE software contractors. In his February 16, 1999, comments on a draft of our report, the Commissioner of Customs agreed with our findings and committed to implementing our recommendations. On April 1, 1999, the Commissioner provided us a status report on Customs’ efforts to do so. In brief, the Commissioner stated that Customs is conducting and will conduct additional analyses to consider alternative approaches to ACE, and will base these analyses on the assumption that Customs will use and not duplicate ITDS functionality; is developing the capability to perform cost-benefit analyses of ACE increments, and is conducting and will continue to conduct postimplementation reviews of ACE increments; has retained an audit firm to independently validate cost-benefit analyses; is developing software process improvement plans to achieve software process maturity of level 2 and then level 3; and is preparing a directive to require at least level 2 processes of all Customs software contractors.
Additionally, the Commissioner stated that Customs is developing a plan for engaging a prime integration contractor that is at least SEI level 3 certified. Under this approach, the prime contractor would assist Customs in implementing effective system/software engineering processes and would engage subcontractors to meet specified system development and maintenance needs. Successful systems modernization is absolutely critical to Customs’ ability to perform its trade import mission efficiently and effectively in the 21st century. Systems modernization success, however, depends on doing the “right thing, the right way.” To be “right,” organizations must (1) invest in and build systems within the context of a complete and enforced enterprise systems architecture, (2) make informed, data-driven decisions about investment options based on expected and actual return-on-investment for system increments, and (3) build system increments using mature software engineering practices. Our reviews of agency system modernization efforts over the last 5 years point to weaknesses in these three areas as the root causes of their not delivering promised system capabilities on time and within budget. Until Customs corrects its ACE management and technical weaknesses, the federal government’s troubled experience on other modernization efforts is a good indicator of what to expect for ACE. In fact, although Customs does not collect data to know whether the first two ACE releases are already falling short of cost and performance expectations, the data it does collect on meeting milestones show that the first two releases have taken about 2 years longer than originally planned. This is precisely the type of unaffordable outcome that can be avoided by making the management and technical improvements we recommended. Fortunately, Customs fully recognizes the seriousness of the situation and has committed to correcting its ACE management and technical weaknesses.
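The per-increment investment discipline described above (committing funds only when an increment's expected return on investment is favorable, then validating actuals after piloting) can be sketched as follows. All names and dollar figures are hypothetical, not Customs estimates.

```python
# Hypothetical sketch of incremental investment gating; figures are illustrative.

def roi(costs: float, benefits: float) -> float:
    """Return on investment as a fraction of cost."""
    return (benefits - costs) / costs

def approve_increment(est_costs: float, est_benefits: float, hurdle: float = 0.0) -> bool:
    """Fund an increment only if its own estimated ROI clears the hurdle rate."""
    return roi(est_costs, est_benefits) > hurdle

increments = [
    ("increment 1", 40.0, 55.0),
    ("increment 2", 60.0, 58.0),   # negative ROI: rework or defer, don't fund as-is
    ("increment 3", 30.0, 45.0),
]

for name, costs, benefits in increments:
    decision = "fund" if approve_increment(costs, benefits) else "rework or defer"
    print(f"{name}: ROI {roi(costs, benefits):+.0%} -> {decision}")
```

After an increment is piloted, the same calculation would be repeated with actual costs and validated benefits, and the comparison of actuals to estimates would inform decisions on later increments, as the recommendations above describe.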
We are equally committed to working with Customs as it strives to do so and with Congress as it oversees this important initiative. This concludes my statement. I would be glad to respond to any questions that you or other Members of the Subcommittee may have at this time.
Pursuant to a congressional request, GAO discussed the Customs Service's management of its Automated Commercial Environment (ACE) system. GAO noted that: (1) the need to leverage information technology to improve the way that Customs does business in the import arena is undeniable; (2) Customs' existing import processes and supporting systems are simply not responsive to the business needs of either Customs or the trade community, whose members collectively import about $1 trillion in goods annually; (3) these existing processes and systems are paper-intensive, error-prone, and transaction-based, and they are out of step with the just-in-time inventory practices used by the trade; (4) recognizing this, Congress enacted the Customs Modernization and Informed Compliance Act to define legislative requirements for improving import processing through an automated system; (5) Customs fully recognizes the severity of the problems with its approach to managing import trade and is modernizing its import processes and undertaking ACE as its import system solution; (6) begun in 1994, Customs' estimate of the system's 15-year life cycle cost is about $1.05 billion, although this estimate is being increased; (7) in light of ACE's enormous mission importance and price tag, Customs' approach to investing in and engineering ACE demands disciplined and rigorous management practices; (8) such practices are embodied in the Clinger-Cohen Act of 1996 and other legislative and regulatory requirements, as well as accepted industry system/software engineering models, such as those published by the Software Engineering Institute; (9) unfortunately, Customs has not employed such practices to date on ACE; (10) GAO's February 1999 report on ACE describes serious management and technical weaknesses in Customs' management of ACE; and (11) the ACE weaknesses are: (a) building ACE without a complete and enforced enterprise systems architecture; (b) investing in ACE without a firm basis for 
knowing that it is a cost-effective system solution; and (c) building ACE without employing engineering rigor and discipline.
To qualify for SNF care, Medicare beneficiaries typically need to be admitted to a SNF within 30 days after discharge from a hospital stay of at least 3 days and need care for a condition that was treated during the hospital stay or that arose while receiving SNF care. Medicare may cover up to 100 days per episode of SNF care. Many SNFs also provide long-term care, which Medicare does not cover, to Medicaid or private-paying residents. Medicaid, the joint federal-state program for certain low-income individuals, is the primary payer for over 60 percent of SNF residents. Industry advocates have raised concerns that Medicaid payment rates in many states are lower than the costs of providing care. While Medicare and Medicaid separately certify SNFs, nearly all SNFs have dual certification and participate in both programs. SNF residents who do not qualify for Medicare or Medicaid may have private insurance pay for their care or they may pay out of pocket. SNFs differ by type of ownership. As of 2014, 70 percent of SNFs were for-profit, 24 percent were nonprofit, and 5 percent were operated by government agencies. In general, for-profit SNFs have a goal of making profits that are distributed among their owners and stockholders. For example, several studies have demonstrated that for-profit SNFs generally have lower nurse-to-resident staffing ratios compared with nonprofit SNFs, likely allowing them to reduce their personnel costs and increase their margins. Nonprofit SNFs receive favorable tax status because they are not allowed to operate for the benefit of private interests. SNFs also vary by chain affiliation. About three-fifths of SNFs were owned or operated by chains (entities that own multiple facilities), while the remainder were independent in 2014, the latest year for which data were available. While most chain-affiliated SNFs are for-profit, some are nonprofit or government-operated.
Chains may develop complex administrative structures to spread expenses across multiple SNFs. Researchers have raised questions about the effects of chain ownership on SNF quality of care. SNFs employ three types of nursing staff: RNs, LPNs, and CNAs. As we have previously reported, the responsibilities and salaries of these three types of nurses are related to their levels of education. The staffing mix, or the balance SNFs maintain among RNs, LPNs, and CNAs, is generally related to the needs of the residents served. For example, a higher proportion of RNs may be employed to meet residents’ needs in SNFs that serve more residents with acute care needs or in SNFs with specialty care units (such as units for residents who require ventilators). However, SNFs may be unable to pursue their ideal staffing mix for reasons such as high turnover among LPNs and CNAs.

Licensed Nurses and Nurse Aides

Registered nurses (RN) have at least a 2-year degree and are licensed in a state. Because of their advanced training and ability to provide skilled nursing care, RNs are paid more than other nursing staff. Generally, RNs are responsible for managing residents’ nursing care and performing complex procedures, such as starting intravenous feeding or fluids. Licensed practical nurses (LPN) have a 1-year degree, are also licensed by the state, and typically provide routine bedside care, such as taking vital signs. Certified nursing assistants (CNA) are nurse aides or orderlies who work under the direction of licensed nurses, have at least 75 hours of training, and have passed a competency exam. CNAs’ responsibilities usually include assisting residents with eating, dressing, bathing, and toileting. CNAs typically have more contact with residents than other nursing staff and provide the greatest number of hours of care per resident day. CNAs generally are paid less than RNs and LPNs.
There are no federal minimum standards linking SNFs’ nurse staffing to the number of residents, but SNFs that participate in both Medicare and Medicaid are required to have sufficient nursing staff to provide nursing and related services to allow each resident to attain or maintain the highest practicable physical, mental, and psychosocial well-being. In general, every SNF must have licensed nurses (RNs or LPNs) on duty around the clock, including one RN on duty for at least 8 consecutive hours per day, 7 days per week. According to one study, 34 states had established additional minimum requirements for the number of nursing hours per resident day as of 2010. Researchers have found that higher total nurse staffing levels (RNs, LPNs, and CNAs combined) and higher RN staffing levels are typically associated with higher quality of care, as shown by a wide range of indicators. For example, lower total nurse and RN staffing levels have been linked to higher rates of deficiency citations, which may involve actual harm or immediate jeopardy to residents. In addition, higher total nurse and RN staffing levels have been associated with better health outcomes, such as fewer cases of pressure ulcers, urinary tract infections, malnutrition, and dehydration, as well as improved resident functional status. In 2001, a CMS contractor reported on the effect of nurse staffing on quality of care in SNFs. The contractor identified staffing thresholds in both a short-stay sample of Medicare SNF admissions and a long-stay sample of nursing home residents who were in the facility for at least 90 days. These thresholds marked the point of diminishing returns: staffing increases up to the thresholds yielded incremental quality benefits, but once the thresholds were met, additional staffing yielded no further quality gains. For the short-stay sample, the thresholds were 0.55 hours per resident day for RNs and 3.51 hours per resident day for all nurses.
For the long-stay sample, the thresholds were 0.75 hours per resident day for RNs and 4.08 hours per resident day for all nurses. PPACA required SNFs to separately report expenditures for wages and benefits for direct care staff, including specific data on RNs, LPNs, CNAs, and other medical and therapy staff, and required CMS to redesign the cost report in consultation with private sector accountants experienced with SNF cost reports to meet this requirement. The act also required CMS, in consultation with others, to categorize the expenditures listed on the cost report, regardless of any source of payment for such expenditures, into four functional accounts—direct care (including nursing, therapy, and medical services), indirect care (including housekeeping and dietary services), capital assets (including building and land costs), and administrative services—annually. Finally, the act required CMS to make information on SNFs’ expenditures “readily available to interested parties upon request.” CMS collects detailed SNF expenditure data in Medicare cost reports and posts the raw data on its website for the public. On their cost reports, SNFs must disclose total costs and allocate general services costs such as housekeeping and nursing administration. CMS officials told us they modified the cost report as required by PPACA in December 2011. Effective for cost reporting periods beginning on or after January 1, 2012, CMS required SNFs to provide expenditure data for full-time and part-time direct care employees who are directly hired and under contract. CMS officials said the agency implemented the PPACA requirement to make information on SNFs’ expenditures “readily available to interested parties upon request” by posting the raw data on its website. The CMS website contains the raw cost report data that SNFs submitted for fiscal years 1995 through 2015.
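The CMS contractor's staffing thresholds described above can be expressed as a simple check. The threshold values are those cited in this report; the facility's staffing figures in the example are hypothetical.

```python
# Staffing thresholds (hours per resident day, HPRD) from the 2001 CMS
# contractor study cited in the report.
THRESHOLDS = {
    "short_stay": {"rn": 0.55, "total": 3.51},
    "long_stay": {"rn": 0.75, "total": 4.08},
}

def meets_thresholds(sample: str, rn_hprd: float, total_hprd: float) -> bool:
    """True if a facility meets both the RN and total-nurse thresholds."""
    t = THRESHOLDS[sample]
    return rn_hprd >= t["rn"] and total_hprd >= t["total"]

# Hypothetical facility: 0.6 RN HPRD and 3.9 total HPRD.
print(meets_thresholds("short_stay", 0.6, 3.9))  # meets both short-stay thresholds
print(meets_thresholds("long_stay", 0.6, 3.9))   # falls short of the 0.75 RN threshold
```

Consistent with the study's finding of diminishing returns, a facility already above both thresholds for its sample would not be expected to gain further quality benefits from added staffing.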
The website also notes that CMS “has made a reasonable effort to ensure that the provided data/records/reports are up-to-date, accurate, complete, and comprehensive at the time of disclosure.” However, on the basis of our interviews with public stakeholders and our own observations, we found that CMS has not taken two key steps to make SNF expenditure data readily accessible. First, CMS has not provided the expenditure data in an accessible way. The data’s format, volume, and organization can make it difficult for public stakeholders to use the data. CMS posts data for each fiscal year across three separate files. Because of how CMS formats the data, users need certain software packages and programming skills to analyze data for each fiscal year. In addition, CMS has acknowledged that the data files are so large that some users have been unable to download them. One of the researchers we interviewed stated that the amount of time needed to analyze the data typically requires a grant. CMS also does not organize SNF expenditures in a meaningful way for analysis. For example, 12 of the 15 cost centers in the general services category are related to indirect care, so a user must make additional calculations to determine a SNF’s total indirect care costs. Second, CMS has not provided the expenditure data in a place that is easy to find on its website. For example, representatives of the two beneficiary advocacy organizations we interviewed told us they were unable to find the cost report data on the CMS website and noted the importance of making the SNF expenditure data easy to locate. CMS officials told us they did not know who would use SNF expenditure data or for what purpose. Public stakeholders could make better use of SNF expenditure data if CMS took steps to make the data more accessible.
For example, representatives of the two beneficiary advocacy organizations and one researcher we interviewed said CMS could incorporate summary expenditure measures into Nursing Home Compare, the CMS website that contains summary measures of SNF quality. Prior research has demonstrated that presenting cost and finance measures in a manner that consumers can easily interpret, displaying them alongside quality data, and focusing on information that is relevant to consumers can help increase their effectiveness. For example, the California HealthCare Foundation’s CalQualityCare.org website provides ideas on how to communicate SNF expenditure measures to consumers. This website allows consumers to find facility data and compare long-term care providers across California. CMS officials told us that adding SNF expenditure measures to Nursing Home Compare is a possibility in the next 2 to 5 years. The officials noted that, as of December 2015, CMS had not begun considering the posting of SNF expenditure data on Nursing Home Compare nor begun systematically evaluating how to publicly report expenditure measures. The officials explained that the agency is currently focused on implementing an electronic system for collecting SNF direct care staffing data (known as the Payroll-Based Journal) and making changes to existing measures in Nursing Home Compare this year. In March 2016, CMS released a public data set on its website that contains information on utilization, payments, and submitted charges for services SNFs provided to Medicare beneficiaries in 2013. Upon releasing the data set, CMS officials stated they were committed to greater data transparency. In making data accessible to public stakeholders, federal internal control standards related to external communication suggest that agencies consider the audience, nature of information, availability, cost, and legal or regulatory requirements to ensure that information is communicated in a quality manner. 
Until CMS takes steps to make SNF expenditure data easier to use and locate, public stakeholders will have difficulty accessing the only publicly available source of financial data for many SNFs. Despite CMS’s statement that it has made a reasonable effort to ensure the accuracy of SNF cost report data, we found that the agency performs minimal quality control to ensure the reliability of the SNF expenditure data in the Medicare cost reports. Instead, CMS largely relies on SNFs to validate their own data. CMS requires SNFs to self-certify to the accuracy and completeness of their cost report data. However, according to CMS officials and one researcher we interviewed, there is little incentive for SNFs to ensure the accuracy and completeness of their data because the data do not affect the amount of Medicare payments each SNF receives. Nevertheless, CMS does use the cost report data to update overall SNF payment rates. Despite this, CMS officials told us the agency conducts “extremely limited” reviews of cost report data because of funding and resource constraints. The officials said they rarely adjust SNFs’ reported costs and focus instead on improper payment reviews. For these reasons, CMS officials and the two researchers we interviewed told us they could not place full confidence in the reliability of the SNF expenditure data in the cost reports. Federal internal control standards require agencies to use quality information. The standards highlight the importance of processing obtained data into quality information, central to which is its accessibility and reliability. Reliable information that is accurate and complete can help agencies evaluate performance, make informed decisions, address risks, and achieve key objectives. Until CMS takes steps to ensure the accuracy and completeness of the SNF expenditure data, the data’s reliability cannot be ensured.
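The "additional calculations" the report describes for determining a SNF's total indirect care costs (summing the 12 indirect-care cost centers in the general services category) amount to a simple aggregation. In the sketch below, the cost center names and dollar amounts are illustrative placeholders, not the actual cost report field names.

```python
# Hypothetical sketch: cost center names and amounts are illustrative, not the
# actual Medicare cost report fields.

INDIRECT_CARE_CENTERS = [
    "housekeeping",
    "dietary",
    "laundry_and_linen",
    "plant_operation",
    # ...the remaining indirect care cost centers would be listed here
]

def total_indirect_care(record: dict) -> float:
    """Sum the indirect care cost centers present in one SNF's record."""
    return sum(record.get(center, 0.0) for center in INDIRECT_CARE_CENTERS)

snf_record = {  # hypothetical dollar amounts for one facility
    "housekeeping": 250_000.0,
    "dietary": 400_000.0,
    "laundry_and_linen": 90_000.0,
    "plant_operation": 310_000.0,
}
print(f"indirect care total: ${total_indirect_care(snf_record):,.0f}")
```

If CMS pre-aggregated such totals (or published them alongside the raw files), users would not need programming skills to derive a basic indirect-care figure for each facility.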
Our analysis found that, for each fiscal year from 2011 through 2014, direct and indirect care costs were lower as a percentage of revenue, on average, at for-profit SNFs compared with nonprofit and government SNFs. Costs were similarly lower at chain SNFs compared with independent SNFs. Over the 4-year period we examined, the percentage of revenue spent on direct and indirect care remained relatively constant, on average, at for-profit and nonprofit SNFs but decreased at government SNFs. For both chain and independent SNFs, the percentage of revenue spent on direct and indirect care remained relatively constant, on average, from fiscal years 2011 through 2014. (See fig. 1.) For for-profit and nonprofit SNFs, both overall costs and total revenues increased, on average, in each of the 4 years we examined. For example, for-profit and nonprofit SNFs generally had small annual increases in their direct care costs. However, because their revenue also increased slightly each year, on average, their direct care costs remained relatively constant as a percentage of revenue. Both overall costs and direct care costs decreased, on average, at government SNFs in each fiscal year from 2011 through 2014. Total revenues also decreased, on average, at government SNFs between fiscal years 2011 and 2014. Regardless of ownership type and chain affiliation, SNFs’ costs for capital-related assets and administrative services accounted for a similar percentage of revenue, on average, during each fiscal year from 2011 through 2014. According to the cost report data, capital-related asset costs accounted for 4 percent to 7 percent of revenue, on average, at for-profit, nonprofit, and government SNFs in each year. Similarly, costs for capital-related assets at chain and independent SNFs generally accounted for 5 percent to 6 percent of revenue, on average, in each year. 
During these 4 years, costs for administrative services accounted for 8 percent to 9 percent of revenue, on average, regardless of ownership type and chain affiliation. In addition, median margins were higher for for-profit and chain SNFs than for other SNFs. As a group, for-profit SNFs had a higher median margin (between 16 percent and 19 percent) than nonprofit and government SNFs (between 12 percent and 15 percent and between 3 percent and 13 percent, respectively) for each fiscal year between 2011 and 2014. Similarly, median margins were generally higher at chain SNFs (between 16 percent and 19 percent) than at independent SNFs (between 12 percent and 17 percent) in each year. All SNF organization types had positive median all-payer margins each year, meaning that their payments more than covered their costs. Moreover, from fiscal years 2011 through 2014, median margins increased regardless of ownership type and chain affiliation, but the amount of the increase differed between organization types. The median margin increased more at government SNFs than at for-profit and nonprofit SNFs and more at independent SNFs than at chain SNFs. During the 4-year period, the median margin at government SNFs increased 10 percentage points (from 3 percent to 13 percent), while it increased 3 percentage points at for-profit SNFs (from 16 percent to 19 percent) and at nonprofit SNFs (from 12 percent to 15 percent). In addition, independent SNFs’ median margin increased by 5 percentage points (from 12 percent to 17 percent) and chain SNFs’ median margin increased by 3 percentage points (from 16 percent to 19 percent). (See fig. 2.) SNFs’ nursing staff levels, as measured by nurse time per resident day, were relatively stable for fiscal years 2012 through 2014, but there was some variation by type of ownership. For-profit SNFs generally had less nursing time per resident day in each of the 3 years we examined. 
After our adjustment for resident case-mix, we continued to observe the same trends. These trends were generally consistent with the small annual increases in direct care costs we observed at for-profit and nonprofit SNFs during this period. Table 1 shows SNFs’ reported (unadjusted) and adjusted total nurse and RN time per resident day. Examining each fiscal year separately, we estimated that a SNF’s margin generally had a small, but statistically significant, effect on its nursing time per resident day. After controlling for other factors, we estimated that a SNF’s case-mix adjusted total nurse and RN time per resident day (reflecting the time nurses spend on both direct patient care and administrative duties) decreased slightly as its margin increased. For fiscal year 2012, we estimated that if a SNF with a margin of 20 percent and a case-mix adjusted total nurse time of 4 hours per resident day increased its margin to 21 percent, its total nurse time would fall to 3 hours and 51.9 minutes per resident day (a decrease of 8.1 minutes). For the same year, we estimated that a SNF’s case-mix adjusted RN time per resident day decreased by 0.6 minutes for each percentage point increase in its margin. Similarly, for fiscal year 2013, we estimated that a 1 percentage point increase in a SNF’s margin decreased its case-mix adjusted total nurse time per resident day by 5.1 minutes and its case-mix adjusted RN time per resident day by 0.4 minutes. Finally, for fiscal year 2014, we estimated that a SNF’s case-mix adjusted total nurse time per resident day decreased by 7.4 minutes and its case-mix adjusted RN time per resident day decreased by 0.2 minutes for each percentage point increase in its margin. In each of the 3 fiscal years, which we examined separately, the relationship between SNF nursing time and margins varied by ownership type. 
Table 2 shows our estimates for the change in a SNF’s case-mix adjusted total nurse and RN time per resident day for each percentage point increase in its margin. For example, for fiscal year 2012, we estimated that if a for-profit SNF with a margin of 20 percent and a case-mix adjusted total nurse time of 4 hours per resident day increased its margin to 21 percent, holding all other variables constant, its total nurse time would fall to 3 hours and 49 minutes per resident day (a decrease of 11.0 minutes). The relationship between a SNF’s total nurse time per resident day and its margin also differed slightly by chain affiliation. We estimated that the total nurse time per resident day decreased slightly more at SNFs that were part of chains than at those that were independent. For example, we estimated that for each percentage point increase in a SNF’s margin, the case-mix adjusted total nurse time per resident day decreased by 6.9 minutes at chain-affiliated SNFs and by 4.3 minutes at independent SNFs in fiscal year 2014. For each of the 3 years we examined, a chain-affiliated SNF’s margin did not have a statistically significant effect on its RN time per resident day. Although the effect of margins in our regression analyses was statistically significant, margins were not the strongest predictor of case-mix adjusted total nurse and RN time per resident day. Accounting for the state where each SNF was located was very important in explaining its nursing time. This could be attributable to variation across states in staffing requirements, Medicaid reimbursement rates, or other factors. Because of the strong effect of the state where each SNF was located, we needed to statistically control for a SNF’s state to isolate the effect of a SNF’s margin on its total nurse and RN time per resident day. In addition, we estimated that a higher proportion of Medicare days increased a SNF’s total nurse and RN time per resident day. 
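The coefficient interpretation above is simple linear arithmetic. As a minimal sketch, using the fiscal year 2012 estimates quoted in this section (the baseline of 4 hours per resident day is the illustrative starting point used in the text):

```python
# Interpreting the regression coefficients reported in this section: each
# 1-percentage-point increase in margin shifts predicted nursing time by the
# coefficient, holding all other variables constant.

def predicted_minutes(baseline_minutes, coeff_min_per_point, margin_change_points):
    """Predicted nursing minutes per resident day after a margin change."""
    return baseline_minutes + coeff_min_per_point * margin_change_points

# Fiscal year 2012, all SNFs: -8.1 minutes of total nurse time per point.
all_snfs_2012 = predicted_minutes(240, -8.1, 1)   # baseline 4 hours = 240 minutes
print(all_snfs_2012)   # about 231.9 minutes, i.e., 3 hours and 51.9 minutes

# Fiscal year 2012, for-profit SNFs: -11.0 minutes per point.
for_profit_2012 = predicted_minutes(240, -11.0, 1)
print(for_profit_2012)  # about 229 minutes, i.e., 3 hours and 49 minutes
```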
See appendix I for additional detail on the methods and results of our expenditure analyses. The collection of SNF expenditure data gives CMS the opportunity to provide information to the public on SNFs’ relative expenditures. Data that are readily accessible to the public and validated for completeness and accuracy to ensure reliability can contribute to SNF data transparency. However, public stakeholders have experienced difficulty accessing the data—including locating and using the data—and CMS efforts to ensure data accessibility and reliability have been limited. To improve the accessibility and reliability of SNF expenditure data, we recommend the Acting Administrator of CMS take the following two actions:
1. Take steps to improve the accessibility of SNF expenditure data, making it easier for public stakeholders to locate and use the data.
2. Take steps to ensure the accuracy and completeness of SNF expenditure data.
We provided a draft of this report to HHS for comment. In its written comments, HHS concurred with our recommendation to improve the accessibility of SNF expenditure data. HHS disagreed with our recommendation that it take steps to ensure the accuracy and completeness of the SNF expenditure data. HHS said that it has made a reasonable effort to ensure the accuracy of the expenditure data, that the data are used for general purposes, and that the amount of time and resources that may be required to verify the accuracy and completeness of the data could be substantial and might not create significant benefit to the agency or the public. However, during the course of our work, CMS told us that the agency conducts only “extremely limited” reviews of the expenditure data due to resource constraints. Moreover, we found that CMS uses the expenditure data to update overall SNF payment rates, in addition to more general purposes. 
Therefore, we continue to believe that CMS should take steps to ensure reliable expenditure data that are accurate and complete. HHS’s comments on a draft of this report are reproduced in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Health and Human Services, and the Acting CMS Administrator. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This appendix describes our methodology for examining (1) how skilled nursing facility (SNF) costs and margins vary by facility characteristics and (2) how SNF nurse staffing levels vary by facility characteristics and the relationship between SNF nurse staffing levels and margins. The appendix also provides further details of the results of our analyses. To examine how SNF costs and margins vary by facility characteristics, we developed cost categories, calculated the total costs for each category as a percentage of revenue, and made comparisons across SNF groups. We organized each SNF’s costs into four categories: direct care, indirect care, capital-related assets, and administrative services. Officials from the Centers for Medicare & Medicaid Services (CMS) said the categories we used included the appropriate expenses listed on the cost reports. Table 3 provides a crosswalk between the cost categories we used and the cost centers from the cost report. We then calculated each SNF’s costs as a percentage of revenue. 
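The cost-share calculation, along with the all-payer margin measure used throughout this report, can be sketched as follows. The dollar figures are hypothetical, not drawn from the cost report data:

```python
# Sketch of the cost-share and margin calculations described in this appendix.
# Category totals and revenue below are illustrative; in the actual analysis
# they come from Medicare cost report worksheets.

def cost_share(category_cost, total_revenue):
    """A cost category (e.g., direct care) as a percentage of revenue."""
    return 100.0 * category_cost / total_revenue

def margin(total_revenue, total_costs):
    """Percentage of revenue retained after costs (all-payer margin)."""
    return 100.0 * (total_revenue - total_costs) / total_revenue

# Hypothetical SNF: $10.0M revenue, $8.4M total costs, $1.0M direct care costs.
print(cost_share(1_000_000, 10_000_000))  # → 10.0 (direct care share of revenue)
print(margin(10_000_000, 8_400_000))      # → 16.0 (margin, in percent)
```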
We also computed each SNF’s margin, reflecting the percentage of revenue each SNF retained. While SNFs may be part of larger nursing homes that operate multiple lines of business (such as hospices, ancillary services, and home health care services), we focused our analyses on the SNF line of business. We compared SNF costs and margins by ownership type and chain affiliation. To examine how SNF nurse staffing levels vary by facility characteristics and the relationship between SNF nurse staffing levels and margins, we performed statistical analyses to identify factors associated with each SNF’s total nurse and registered nurse (RN) staffing levels. To measure nurse staffing levels, we calculated each SNF’s total nurse and RN time per resident day. A SNF’s total nurse time per resident day reflects the number of hours that RNs, licensed practical nurses (LPN), and certified nursing assistants (CNA) worked per resident day. We computed each SNF’s total nurse and RN time per resident day using a complex formula. We first calculated a SNF’s total paid hours for full-time and part-time RNs, LPNs, and CNAs who are both directly hired and under contract. Because the time needed for treating residents varies with their clinical conditions and treatments, we then adjusted each SNF’s nursing time per resident day on the basis of its Medicare residents’ health care needs. This process is known as case-mix adjustment. We developed our formula based largely on CMS’s methodology for case-mix adjusting nurse staffing measures for Nursing Home Compare, the CMS website that contains summary measures of SNF quality data. CMS’s approach is based on the distribution of a SNF’s residents’ assignments into one of 53 different payment groups, called resource utilization groups. Each group describes residents with similar therapy, nursing, and special care needs. 
CMS’s model uses the estimated RN, LPN, and CNA minutes for each resource utilization group based on the results from the Staff Time Measurement Studies conducted in 1995 and 1997. For our analyses, we used a different source of data than what CMS uses for Nursing Home Compare. CMS obtains staffing data from Form CMS-671 (Long Term Care Facility Application for Medicare and Medicaid) from the Certification and Survey Provider Enhanced Reports (CASPER) system and census data from Form CMS-672 (Resident Census and Conditions of Residents). CMS officials advised us against using the CASPER data. CMS has observed that the CASPER data, which are collected over a 2-week period at the time of a SNF’s annual inspection survey, generally indicate higher RN staffing levels and lower LPN and CNA staffing levels compared with the Medicare cost reports. Table 4 shows CMS’s analysis of the staffing levels using 2013 data from the Medicare cost reports and CASPER. Because of the available data in the cost reports, there were some limitations with our case-mix adjustment calculation. While the cost reports include data on the resource needs for Medicare residents, they do not capture data on the resource needs for non-Medicare residents. Accordingly, we estimated a SNF’s resident case-mix based only on its Medicare residents’ resource utilization groups. In addition, the cost reports obtain data on 13 additional resource utilization groups that CMS implemented in 2010 to reflect updated staff time measurement data. For our calculation, we could not use Medicare days attributable to these groups on the cost reports. Finally, because SNFs’ staffing data on the cost reports were incomplete for fiscal year 2011 and were generally unavailable beyond fiscal year 2014 when we began our analyses, we limited our analyses to fiscal years 2012 through 2014. We computed the adjustment as HoursAdjusted = (HoursReported ÷ HoursExpected) × HoursNational Average, where: HoursAdjusted is the case-mix adjusted total nurse or RN hours per resident day. 
HoursReported is each SNF’s number of reported total nurse or RN hours per resident day. HoursExpected is each SNF’s number of expected total nurse or RN hours per resident day. HoursNational Average is the national average of reported total nurse or RN hours per resident day. We then used multiple linear regression analysis, a statistical procedure that allowed us to assess the relationship between a SNF’s margin and its case-mix adjusted total nurse and RN time per resident day, controlling for other factors. The other factors in our models included a SNF’s average hourly RN wage, average resident length of stay, chain affiliation, number of beds, number of competitors within 15 miles, ownership type, proportion of Medicare days, and urban or rural status. Our models also accounted for the state where each SNF was located. We performed separate regressions for all SNFs in fiscal years 2012, 2013, and 2014. We also performed regressions by ownership type and chain affiliation. Table 5 shows the results of our regressions for all SNFs where the dependent variable is the case-mix adjusted total nursing hours per resident day, and table 6 shows the results of our regressions for all SNFs where the dependent variable is the case-mix adjusted RN hours per resident day. The tables include regression coefficients and R2 statistics. Regression coefficients can be interpreted as the predicted change in nursing time per resident day for every unit change in the independent variable. In general, it is not meaningful to compare the size of these coefficients because our independent variables are on different scales. We used R2 statistics to estimate how much of the variation in the nursing time per resident day can be explained by all the independent variables in our models. 
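The case-mix adjustment described above can be sketched as follows, assuming the standard CMS-style formula (adjusted = reported ÷ expected × national average). The three-group resource utilization group (RUG) mix is illustrative, standing in for the actual 53 groups, and all numbers are hypothetical:

```python
# Sketch of the case-mix adjustment described in this appendix. Expected
# hours are built from a facility's mix of resource utilization groups (RUG)
# and per-group staffing minutes (from the CMS staff time measurement
# studies); all values below are illustrative.

def expected_hours(rug_shares, rug_minutes_per_day):
    """Expected nursing hours per resident day given the facility's RUG mix.

    rug_shares: fraction of Medicare resident days in each RUG (sums to 1.0)
    rug_minutes_per_day: estimated nursing minutes per resident day per RUG
    """
    total_minutes = sum(share * minutes
                        for share, minutes in zip(rug_shares, rug_minutes_per_day))
    return total_minutes / 60.0

def case_mix_adjusted(reported, expected, national_average):
    """HoursAdjusted = (HoursReported / HoursExpected) * HoursNationalAverage."""
    return reported / expected * national_average

# Illustrative facility with three RUGs instead of the actual 53.
expected = expected_hours([0.5, 0.3, 0.2], [200.0, 240.0, 300.0])
adjusted = case_mix_adjusted(reported=4.0, expected=expected,
                             national_average=3.9)
# A facility reporting more hours than its case mix predicts is adjusted
# above the national average, and vice versa.
```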
Taken together, the independent variables explained between 27 percent and 31 percent of the variation in the case-mix adjusted total nursing hours per resident day and between 36 percent and 39 percent of the variation in the case-mix adjusted RN hours per resident day for fiscal years 2012 through 2014. Accounting for the state where each SNF was located was very important in explaining its nursing time per resident day. This could be attributable to variation across states in staffing requirements, Medicaid reimbursement rates, or other factors. In addition to the contact named above, Martin T. Gahart, Assistant Director; David Grossman, Analyst-in-Charge; Todd D. Anderson; and Jane Eyre made key contributions to this report. Also contributing were Elizabeth T. Morrison, Vikki Porter, Eric Wedum, and Jennifer Whitworth.
Medicare paid $28.6 billion to SNFs for nearly 1.7 million beneficiaries in 2014. About 15,000 SNFs provide short-term skilled nursing and rehabilitative care after an acute care hospital stay. As of 2014, 70 percent of SNFs were for-profit, 24 percent were nonprofit, and 5 percent were government-operated. About three-fifths of the SNFs were affiliated with chains. The average SNF Medicare margin was 12.5 percent. Some researchers have questioned whether SNF margins come at the expense of patient care in the form of low nurse staffing levels. GAO was asked to provide information on how SNFs spend their Medicare and other revenues. GAO examined (1) the extent to which the expenditure data CMS collects from SNFs and provides to the public are accessible and reliable, (2) how SNF costs and margins vary by facility characteristics, and (3) how SNF nurse staffing levels vary by facility characteristics and the relationship between SNF nurse staffing levels and margins. GAO analyzed Medicare cost report data for fiscal years 2011 through 2014, the most recent years with complete data available. GAO also interviewed CMS officials, researchers, and beneficiary advocates. The Centers for Medicare & Medicaid Services (CMS)—the agency within the Department of Health and Human Services (HHS) that administers Medicare—collects and reports expenditure data from skilled nursing facilities (SNF), but it has not taken key steps to make the data readily accessible to public stakeholders or to ensure their reliability. SNFs are required to self-report their expenditures in annual financial cost reports, and CMS posts the raw data on its website. However, CMS has not provided the data in a readily accessible format and has not posted the data in a place that is easy to find on its website, according to public stakeholders and GAO's observations. In addition, CMS does little to ensure the accuracy and completeness of the data. 
Federal internal control standards suggest that agencies should make data accessible to the public and ensure data reliability. Until CMS takes steps to make reliable SNF expenditure data easier to use and locate, public stakeholders will have difficulty accessing and placing confidence in the only publicly available source of financial data for many SNFs. GAO found that, for each fiscal year from 2011 through 2014, direct and indirect care costs were lower as a percentage of revenue, on average, at for-profit SNFs compared with nonprofit and government SNFs. Direct and indirect care costs were similarly lower at chain SNFs compared with independent SNFs. In addition, the median margin, which measures revenue relative to costs, was higher for for-profit and chain SNFs than for other SNFs in each of the 4 years. The relationship between SNFs' nurse staffing levels (hours per resident day) and their margins varied by ownership type in each fiscal year from 2012 through 2014, the 3 years with complete staffing data. For-profit SNFs generally had lower nurse staffing ratios than did nonprofit and government SNFs. Examining each fiscal year separately, GAO estimated that a SNF's margin had a small, but statistically significant, effect on its case-mix adjusted (that is, adjusted for residents' health care needs) nurse staffing ratios. For example, for each percentage point increase in a for-profit SNF's margin in fiscal year 2014, GAO estimated that the SNF's total nurse staffing ratio (including registered nurses, licensed practical nurses, and certified nursing assistants) decreased by 4.1 minutes per resident day after controlling for other factors. However, in GAO's analyses, these other factors, such as geographic location, were more important predictors of a SNF's case-mix adjusted nurse staffing ratios. GAO recommends that CMS (1) improve public stakeholders' ability to locate and use SNF expenditure data and (2) ensure the accuracy and completeness of the data. 
HHS concurred with the first but not the second recommendation, citing resource considerations. GAO continues to believe that CMS should provide reliable SNF expenditure data.
Partnerships, S-Corps, and trusts are commonly referred to as flow-through entities, as they do not generally pay taxes on income. Instead, they distribute net income—as well as losses—to partners, shareholders, and beneficiaries, respectively, who are subsequently required to report the net income or loss on their individual tax returns and to pay any applicable taxes. Distributed income is reported to IRS on a K-1, which is included in the annual return filed by the flow-through entity. Copies of the Schedule K-1 are provided to partners, shareholders, and beneficiaries for use when filing their respective annual returns. Partners receive a Form 1065 Schedule K-1, “Partner’s Share of Income, Credits, Deductions, etc.”; shareholders receive a Form 1120S Schedule K-1, “Shareholder’s Share of Income, Credits, Deductions, etc.”; and beneficiaries receive a Form 1041 Schedule K-1, “Beneficiary’s Share of Income, Deductions, Credits, etc.” As shown in figure 1, as part of its overall underreporter program, IRS has a specific K-1 document-matching program in which selected K-1 information reported by flow-through entities is compared to information reported by individuals on their tax returns in order to determine whether distributed income has been reported as required. In like manner, income reported to IRS on a K-1 by S-Corps and trusts can be matched with income reported on tax returns by shareholders and beneficiaries, respectively. The purpose of this program is to increase voluntary reporting of flow-through income by taxpayers and to target K-1 related underreporter notices to noncompliant taxpayers. IRS identified about $4.1 billion in underreported income for tax years 2000 and 2001 via the K-1 matching program and assessed about $110 million in additional taxes. In addition to use in the matching program, IRS can also use K-1 information to aid in selecting flow-through entity returns for examination. 
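The matching concept can be sketched as follows. The data structures and comparison logic are illustrative, not IRS's actual screening criteria:

```python
# Sketch of K-1 document matching as described above: income a flow-through
# entity reports distributing to a taxpayer (keyed by TIN) is compared with
# the flow-through income that taxpayer reported on their own return.
# The dict-based records below are a simplification for illustration.

def find_underreporters(k1_income_by_tin, reported_income_by_tin):
    """Return TINs whose K-1 distributed income exceeds what they reported,
    along with the size of the discrepancy."""
    flagged = {}
    for tin, k1_total in k1_income_by_tin.items():
        reported = reported_income_by_tin.get(tin, 0.0)
        if k1_total > reported:
            flagged[tin] = k1_total - reported
    return flagged

# Illustrative data: one taxpayer matches, one reported less than the K-1 shows.
k1_income = {"123-45-6789": 50_000.0, "987-65-4321": 12_000.0}
reported = {"123-45-6789": 50_000.0, "987-65-4321": 4_000.0}
print(find_underreporters(k1_income, reported))  # flags only 987-65-4321
```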
For example, IRS can use K-1 information to aid in identifying flow-through entities involved in potential tax evasion schemes and to develop computer models that may enable IRS to more effectively select returns for examination with the greatest likelihood for a tax change. In order for IRS to use K-1 information in its matching program, the information must either be e-filed by a flow-through entity or, if filed via paper, transcribed by IRS staff for use in its computer systems. Currently, only partnerships with over 100 partners are required by law to e-file their annual returns, including any related K-1s. As a result, for tax year 2002, less than one-quarter of 1 percent of partnerships were required to e-file. Figure 2 illustrates that an e-filed K-1 goes through two basic steps before the information is input into the Information Returns Master File (IRMF). At Step A, the K-1 undergoes up-front checks prior to final acceptance by IRS, whereby the K-1 data must pass specific checks or the entire flow-through entity return is to be rejected until corrected by the entity. The up-front checks include verifying the tax year and proper formatting of names, addresses, and TINs. For example, the partner’s TIN on a K-1 filed by a partnership must be within a specific range established by IRS; if not, the entire partnership return is to be rejected. The only other step for an e-filed K-1 prior to its going through IRS’s document-matching program is the TIN validation process, in which the TIN and name on the K-1 are electronically matched with information in IRS’s files to determine whether the TIN is valid. Generally, this validation occurs several months after IRS accepts the e-filed return. In contrast, a paper-filed K-1 goes through several manual steps, including some of the up-front checks conducted electronically for e-filed K-1s, before TIN validation takes place and the information can be input into the IRMF. 
These steps, particularly transcription, can take up to 6 months to complete, with transcription beginning in May. For example, at Step 4, IRS staff are to edit the flow-through entity return and contact the taxpayer if a required K-1 is missing. At Step 8, IRS staff are to transcribe selected K-1 line items. During the transcription process, the computer conducts checks on select aspects of the keypunched data, such as correlating zip code and state information, and creates an error record for correction. Subsequently, other IRS staff are to compare a sample of the transcribed K-1 data to the original paper-filed K-1 to determine whether the data were accurately transcribed. The TIN on a paper-filed K-1, as on an e-filed K-1, is not computer validated until it reaches the stage where electronic TIN validation occurs, generally several weeks or months after the return was filed. IRS’s program to electronically validate TINs matches the TIN and name on the K-1 to taxpayer identity information in its files. If there is no match, IRS will attempt to “perfect” or correct an incorrect TIN/name combination via a TIN validation process, which entails matching the TIN and name control—the first four characters of an individual’s last name or the first four characters of a business name—with (1) a file which contains all Social Security numbers (SSN) ever issued and all name controls ever associated with them and (2) a file that contains all employer identification numbers (EIN) ever issued and all name controls associated with them. This TIN validation process occurs four times per year, beginning about a month and a half after the end of the filing season. Data transcription errors made by IRS on paper-filed K-1 data and invalid TINs submitted by flow-through entities on both paper-filed and e-filed K-1s lower the accuracy of K-1 data. 
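The TIN validation step described above can be sketched as follows. The simplified name-control rule and the lookup tables are illustrative; the actual IRS files and matching rules are more involved:

```python
# Sketch of TIN validation as described above: the TIN and "name control"
# (first four characters of the surname or business name) on a K-1 are
# matched against files of all SSNs/EINs ever issued and the name controls
# ever associated with them. The dict-based files below are illustrative.

def name_control(name):
    """First four characters of a last name or business name, uppercased
    (a simplification of the actual name-control rules)."""
    return name.upper()[:4]

def validate_tin(tin, name, ssn_name_controls, ein_name_controls):
    """True if the TIN/name-control pair matches either the SSN or EIN file."""
    control = name_control(name)
    return (control in ssn_name_controls.get(tin, set())
            or control in ein_name_controls.get(tin, set()))

ssn_file = {"123-45-6789": {"SMIT"}}   # SSN -> name controls ever associated
ein_file = {"12-3456789": {"ACME"}}    # EIN -> name controls ever associated

print(validate_tin("123-45-6789", "Smith", ssn_file, ein_file))  # True
print(validate_tin("123-45-6789", "Jones", ssn_file, ein_file))  # False
```

A K-1 that fails this check would go through the perfection process described above; if the TIN/name combination still cannot be matched, the K-1 is unusable for document matching.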
IRS transcription errors, which occur only for paper-filed K-1s, ranged from 5 to 9.5 percent for tax year 2002, and IRS is taking steps to reduce these errors. The percentage of invalid TINs for e-filed K-1s is comparable to that for paper-filed K-1s. However, due to potential taxpayer burden and resource constraints, IRS is not notifying flow-through entities of invalid TINs so they can take corrective actions, a step that would likely give e-filing entities enough time to correct many invalid TINs before IRS runs its document-matching program. Paper-filing entities may not have sufficient time to correct invalid TINs before document matching occurs. According to IRS K-1 quality reviews conducted at two IRS locations, the overall K-1 transcription error rate for tax year 2002 ranged from 5 to 9.5 percent—errors that by definition are not made in e-filed returns. The most frequent errors dealt with names and addresses. IRS also found transcription errors in dollar amounts and TINs. Errors detected during quality reviews are corrected before the K-1s are posted to the IRMF, which IRS uses to detect potential underreporters and nonfilers. However, less than 2 percent of all K-1s are selected for the K-1 quality review. Transcription errors on all other K-1s are included when the data are posted to the IRMF. Consequently, data from an estimated 18 million tax year 2002 paper K-1s that were entered into databases used by IRS for research and enforcement purposes have transcription error rates from 5 to 9.5 percent. For example: IRS’s K-1 database for tax year 2002 included 16 paper-filed K-1s, each of which showed interest income of over $1 billion. These interest income amounts appeared to be transcription errors. One partnership filing paper K-1s had 73 partners. For 72 of the partners, the K-1 interest recorded in the IRMF was under $200,000. The remaining partner’s interest as recorded in the IRMF was $85.3 billion. 
According to an IRS data quality review of tax year 2001 K-1 document-matching cases, about 5 percent of the cases that were either screened out before taxpayers were contacted or resulted in no change to taxpayers’ tax liabilities after an erroneous underreporter notice was sent were due to transcription errors. The transcription errors included misplaced decimal points and positive money amounts that were transcribed as negative numbers and vice versa. According to IRS officials, it would be too costly to do more data transcription quality review of paper-filed K-1s, such as reentry of K-1 data. Instead, IRS is taking other measures to improve K-1 data accuracy. For example: For tax year 2003, IRS began scanning all K-1s using optical character recognition (OCR) equipment. Also, for tax year 2003, IRS is accepting K-1s with bar codes that contain all the K-1 data. If the bar code is present, the system will pick up the information from the bar code; otherwise, the system will image the K-1 and read the line entries using OCR. Portions of the K-1 or bar code that cannot be read by OCR are manually transcribed. Although IRS originally projected 30 percent of K-1s would be bar coded in tax year 2003, as of July 2004 only 8 percent of K-1s submitted were bar coded. For the 92 percent of K-1s without bar codes that OCR read, almost 20 percent required no transcription, 60 percent required less than half of normal transcription, and 20 percent were entirely transcribed. Although bar coding and OCR bypass most of the manual data transcription, which reduces some data transcription cost and errors, IRS officials still prefer e-filing because bar coding is a paper process with accompanying processing costs. To improve the accuracy of transcription, IRS has implemented new software and improved transcription training. 
At two IRS locations, IRS is using new transcription software intended to increase transcription productivity and accuracy, compared to the current transcription software. In addition, transcription training for the K-1 program has evolved. Each year, feedback is funneled to the IRS transcription trainers to improve the K-1 transcription process. IRS is redesigning the K-1s for both partnership and S-Corp returns so that IRS can scan them into the computer instead of having to transcribe the data manually. Although the redesigned partnership and S-Corp K-1s are expected to be ready by tax year 2004, the redesigned trust K-1 will not roll out until tax year 2005 because trust law makes the trust K-1 different from the other two K-1s. IRS is conducting educational outreach to increase accurate K-1 filing and provide updates to changes in K-1 design. In April 2004, IRS issued a news release to provide tips for businesses, individuals, and tax professionals on accurate K-1 filing. For example, flow-through entities are instructed to ensure the correct TINs are used on K-1s. In addition, the six IRS Tax Forums in 2004 include a session on reporting flow-through items, which addresses the redesign of K-1 forms and K-1 reporting reminders. IRS has also included updates on the K-1 matching program and K-1 redesign in external speeches to stakeholder groups. Finally, in late 2004, IRS plans to implement a multifaceted communication plan to publicize the release of the redesigned K-1s. For IRS to use K-1 data in its document-matching program, the TINs and names on K-1s need to be accurate so they can be linked to individuals’ tax returns and other tax documents. In tax year 2002, about 94 percent of 24 million K-1s that IRS processed contained valid TINs. The remaining 6 percent, or approximately 1.5 million K-1s, had invalid TINs because either IRS made transcription errors or the flow-through entities submitted invalid data. 
The 1.5 million K-1s with invalid TINs had combined income gains of $57.3 billion and combined income losses of $84.1 billion. IRS was able to correct the invalid TINs on about 750,000 of the K-1s, with income gains totaling $20.6 billion and income losses totaling $6.8 billion, so that they could be used in IRS’s document-matching program or for other compliance and research purposes. However, the remaining 740,000 K-1s with invalid TINs, with income gains of $36.6 billion and income losses of $77.2 billion, could not be perfected and thus were unmatchable. IRS did not have data on the number of K-1s in the IRMF whose corrected or unmatchable TINs resulted from transcription errors. After IRS checks the validity of TINs provided on K-1s, it does not notify either paper-filing or e-filing flow-through entities of the invalid TINs it finds so the entities can take steps to correct the TINs, due to concerns about the potential burden on the entities and resource constraints. Because e-filed returns do not go through time-consuming paper processing steps, including transcription, if IRS were to notify the originating entities of invalid TINs, the entities should have time to correct invalid K-1s before IRS performs its document matching in the fall following a tax filing year. For paper-filed K-1s, many entities likely could not respond before the document matching occurs. Because e-filed K-1s are not subject to transcription, none of the keypunching errors associated with paper returns appear in e-filed data. However, as table 1 shows, in tax year 2002 the overall percentage of invalid K-1 TINs IRS found with its TIN validation program was comparable for e-filed (about 7 percent) and paper (6 percent) K-1s.
Factors that may contribute to e-filed K-1s having TIN error rates comparable to those of paper K-1s include (1) large partnerships, which are mandated to file K-1s electronically, submitting such large volumes of K-1s that many may unknowingly submit one or more K-1s with invalid TINs and (2) IRS not applying one of its up-front checks for e-filed partnership K-1s. According to our analysis of IRS’s K-1 database, partnerships that submit a higher volume of K-1s are more likely to submit a K-1 with an invalid TIN than partnerships that submit only a few K-1s. In tax year 2002, e-filed partnerships’ K-1s had the highest rate of invalid TINs (8.7 percent). That same year, 97 percent of the partnerships with more than 100 partners, which are required to e-file, submitted at least one K-1 with an invalid TIN. In contrast, 18 percent of partnerships with 100 or fewer partners submitted at least one K-1 with an invalid TIN. To encourage electronic filing of partnership returns, IRS is not applying its up-front check that would reject an e-filed partnership’s return if it has even one TIN on a K-1 that falls outside the range of numbers associated with SSNs and EINs. Had IRS applied this validation criterion in tax year 2002, 12 percent of the e-filed partnership K-1s with unmatchable TINs would have been rejected and the originating entities would have been asked to take steps to correct the TINs. However, some partnerships have hundreds or thousands of partners, making it more challenging for them to ensure that all partners’ TINs are correct. IRS officials have determined that accepting an e-filed return when the vast majority of the K-1 TINs fall within the range of numbers associated with SSNs and EINs, rather than rejecting the entire entity return due to one or a few TINs that fall outside that range, promotes e-filing.
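An up-front TIN check of the kind described above can be sketched as follows. The exact IRS range tables are not reproduced in this report, so this sketch applies only widely known SSN structural constraints (9 digits; area not 000, 666, or 9XX; group not 00; serial not 0000) and, as an assumption, treats any other 9-digit value as a possible EIN.

```python
import re

# Minimal, illustrative TIN sanity check. This is NOT IRS's actual validation
# logic; it applies only publicly known SSN structural rules and assumes any
# 9-digit value can be an EIN.


def tin_plausible(tin: str, tin_type: str) -> bool:
    """tin_type is 'SSN' or 'EIN'. Returns False for clearly invalid values."""
    digits = re.sub(r"\D", "", tin)  # strip hyphens and other punctuation
    if len(digits) != 9 or digits == "000000000":
        return False
    if tin_type == "SSN":
        area, group, serial = digits[:3], digits[3:5], digits[5:]
        if area in ("000", "666") or area.startswith("9"):
            return False  # areas never issued as SSNs
        if group == "00" or serial == "0000":
            return False
    # No public EIN prefix table is checked here (an assumption of this sketch).
    return True
```

A check at this level of strictness would reject only structurally impossible TINs, which mirrors the report's point that the up-front check catches TINs "outside the range of numbers associated with SSNs and EINs" rather than verifying that a TIN belongs to the named partner.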
In addition, IRS does not notify either e-filing or paper-filing flow-through entities if submitted TINs are found to be invalid during the TIN validation checks it performs subsequent to accepting entities’ returns. In tax year 2002, more than half of the K-1s submitted by 2 percent of the flow-through entities contained invalid TINs. The total number of unmatchable K-1s submitted by these entities represented about 29 percent of the total number of K-1s with unmatchable TINs. IRS officials said that requiring flow-through entities to correct invalid TINs could be a burden because the entities rely on information supplied by individual taxpayers and the correct TINs may not be readily available, particularly for those entities submitting a large number of K-1s. In contrast, IRS does notify filers of missing or invalid TINs submitted on other types of information returns, which then may require the filers to contact third parties for corrected information. For example, for tax years 2000 and 2001 combined, IRS proposed just over $204 million in penalties against nonfederal payers for information returns with invalid TINs. IRS officials acknowledged that flow-through entities may have made mistakes themselves that resulted in invalid TINs or may have the correct information on hand. They also stated that sending such notices would entail some additional cost to IRS and that they currently face resource constraints. However, IRS officials do not have estimates of either the potential benefits, such as increased revenue obtained from document matching utilizing accurate TINs, or the cost to IRS of obtaining valid TINs from flow-through entities. If IRS were to notify flow-through entities of invalid TINs and ask that they take steps to correct the TINs, it likely would be able to receive many corrected TINs, particularly from e-filers, in time for its annual document-matching program.
IRS generally does its document matching from November of the calendar year through January of the following year; the exact timing changes somewhat from year to year. IRS corrects TINs, including K-1 TINs, four times a year: at the end of June, early September, mid-November, and late November. Based on IRS’s 2001 Statistics of Income samples, at least 97 percent of partnerships and S-Corps filed calendar year returns. Consequently, all of these returns were due to be filed prior to IRS’s first TIN validation check in June. Since IRS accepts e-filed returns within 2 days of their submission, all e-filed returns for which filers have not requested extensions should be available for IRS’s June TIN validation program. In this case, IRS would be able to notify the flow-through entities of the invalid TINs, and the entities would have several months to correct the TINs and get them back to IRS before IRS posts the corrected K-1s to the IRMF in time for use in the document-matching program. Even if a flow-through entity did not submit the corrected TIN in time, the entity would be aware of the error and could correct the TIN for the following year. For paper-filing flow-through entities, fewer entities likely would be able to correct invalid TINs in time for inclusion in the document-matching program. Transcription of paper-filed entity returns, including K-1s, begins in May. Because transcription can take up to 6 months, a significant portion of paper-filed entity returns and associated K-1 TINs likely would not be available for the June TIN validation. For those not available until the early September TIN validation, the entities would have much less time to correct TINs and return them to IRS in time for inclusion in the document-matching program.
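The timing argument above can be made concrete with a small date calculation. The validation and matching dates below follow the schedule the report describes (June and early September validation runs, matching beginning in November); the 60-day window an entity needs to respond is an assumption for illustration.

```python
from datetime import date, timedelta

# Illustrative timeline check. Dates follow the report's described schedule;
# the 60-day entity response window is an assumption, not an IRS figure.
MATCHING_STARTS = date(2003, 11, 1)
RESPONSE_WINDOW = timedelta(days=60)


def correction_feasible(first_validation: date) -> bool:
    """Could a notified entity plausibly return a corrected TIN before matching?"""
    return first_validation + RESPONSE_WINDOW <= MATCHING_STARTS
```

Under these assumed dates, an e-filed return caught at the end-of-June validation (June 30 plus 60 days is late August) leaves ample time, while a paper return not validated until early September (September 5 plus 60 days is early November) does not, which is the asymmetry the report describes.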
Because IRS’s new return scanning and bar-coding efforts should make paper-filed return data available more quickly, IRS may be able to include more of them in the June TIN validation and thus provide entities sufficient time to provide corrected TINs if it sends notices of invalid TINs to flow-through entities. In addition to using K-1 data in its document-matching program, IRS is using K-1 data in its research programs to better understand flow-through relationships. When data such as TINs are unavailable or inaccurate, researchers are unable to establish a complete understanding of the network of related entities and taxpayers. Data limitations have also affected IRS’s efforts to identify potentially noncompliant taxpayers for examination. IRS researchers and examination staff indicated that more complete and accurate data would enhance their efforts to detect noncompliance. IRS researchers are using K-1 data to visualize how taxpayers are related to different entities and to evaluate whether compliance issues may exist with flow-through entities. However, inaccurate TINs have sometimes prevented researchers from establishing all relevant links in a network of related entities. As a result, IRS is less able to track the flow of income and losses among entities and could be missing opportunities to address areas of noncompliance. Figure 3 illustrates how inaccurate TINs may prevent IRS from tracking the flow of income through a chain of financial transactions. In this example, an S-Corp distributes losses to an individual shareholder, possibly to allow the shareholder to offset other gains, and distributes income to another shareholder, a trust. Since trusts are flow-through entities and may be nontaxable, the individual shareholder may be using the trust to reallocate income (perhaps to someone in a lower tax bracket) that would otherwise need to be reported and taxed on that individual’s return. 
In our example, the S-Corp transfers income to Trust A, which in turn transfers the income to Trust B. In both transactions, the S-Corp and Trust A submit K-1s with accurate TINs to IRS, so IRS can track the flow of income between the entities. Trust B then transfers the income again to Trust C and submits a K-1 with an inaccurate TIN to IRS. Because of the inaccurate TIN on the K-1, IRS would likely be unable to identify that Trust C is related to the other entities or track the flow of income to its final destination and ultimately determine whether any income was underreported. IRS transcribes limited line items from K-1s that accompany partnership and S-Corp returns. According to IRS staff, at least some of the nontranscribed lines would provide useful information. Similarly, IRS does not transcribe many of the lines from the flow-through entity’s return to which the K-1s are attached. Since e-filing of the full entity’s return is part and parcel of achieving e-filing of K-1s, e-filing of K-1s would have the additional effect of making the complete entity’s tax return information available to IRS examiners and researchers. Complete entity data also would provide useful information for research and examination purposes. For K-1s, IRS identified the line items to be transcribed based on the needs of its document-matching program and not on other potential uses. Regarding K-1s, IRS transcribes about 14 percent of partnership K-1 line items and about 17 percent of S-Corp K-1 line items. Research and examination staff indicated that the nontranscribed information would provide useful information. For example: The “Other Income/(Loss)” line is not captured from the K-1 because it is not useful for document matching, but it can be helpful to researchers in identifying abusive shelters, for example, where the gain is allocated to a tax haven country and the loss is allocated to a domestic investor. 
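The trust-chain example above is essentially a graph-traversal problem: each K-1 is a directed edge from a payer entity to a recipient TIN, and one unmatchable TIN severs the chain. A minimal sketch, with invented entity names and record layout:

```python
# Sketch of linking K-1 records into a directed graph of income flows and
# showing how one invalid recipient TIN breaks the chain. Entity names and
# the record layout are invented for illustration.
from collections import defaultdict

k1_records = [
    {"payer": "S-CORP-1", "recipient": "TRUST-A", "amount": 100_000},
    {"payer": "TRUST-A", "recipient": "TRUST-B", "amount": 100_000},
    # Trust B reported an invalid TIN for its recipient, so the true
    # destination (Trust C in the report's example) cannot be linked.
    {"payer": "TRUST-B", "recipient": None, "amount": 100_000},
]


def trace(start: str) -> list:
    """Follow income from `start` until the chain ends or a link is broken."""
    edges = defaultdict(list)
    for rec in k1_records:
        edges[rec["payer"]].append(rec["recipient"])
    path, node = [start], start
    while edges[node]:
        node = edges[node][0]
        if node is None:  # unmatchable TIN: the trail goes cold here
            path.append("<unknown>")
            break
        path.append(node)
    return path
```

Tracing from the S-Corp reaches Trust A and Trust B but then hits the broken link, so the final destination of the income, and any underreporting there, stays invisible to the analysis.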
Transcribing the shareholder’s ownership percentage from the K-1 would more easily allow classifiers to determine whether the taxpayer has a controlling interest in the S-Corp and whether income and losses are distributed evenly. Regarding the flow-through entity’s return, IRS currently transcribes about 23 percent of the line items for partnership returns and about 20 percent for S-Corp returns, including the K-1. As discussed below, additional data from the full entity return would potentially benefit examination and research programs. For example: The IRS Examination Guide for Abusive Tax Shelters and Transactions lists partnership (Form 1065) and S-Corp (Form 1120S) tax return lines that, when examined with other information, may indicate tax shelter transactions. For partnership returns, about 80 percent of the line items listed in the guide as useful to detect tax shelters are not captured in IRS’s database. For S-Corp returns, about 87 percent of the line items listed are not captured. When selecting returns to be examined, IRS classifiers who focus on tax shelter issues lack information that may identify the taxpayers most likely to be using tax shelters. IRS researchers developed a computer model to better identify S-Corp returns for examination by helping classifiers separate accurate returns from those needing further investigation. The model uses data from IRS’s database of business returns, including Form 1120S and accompanying schedules. Of the approximately 23,000 returns that the model analyzed, IRS identified about 58 percent as having low potential for noncompliance and therefore eliminated them from the universe that might be examined. The remaining 42 percent of returns could not be classified because the model did not have enough data to evaluate them. As a result, IRS has continued to rely on examination staff who focus on S-Corp compliance issues to manually review the returns that the model has not been able to classify.
To help address the lack of 1120S data, IRS will be capturing 10 additional line items from the 1120S for tax year 2003. One IRS researcher estimated that the 10 additional 1120S line items would enable the computer model to identify 15 S-Corp compliance issues compared to its current capability of identifying 2 compliance issues. From our file review of closed S-Corp and partnership tax return examination cases and discussions with IRS examination and research staff, we also found that additional line items from the K-1 and other parts of the entity’s return may assist IRS in selecting tax returns to examine. Based on our sample of closed examination cases, in at least 40 percent of the examinations, IRS corrected line items that are currently not transcribed. IRS examination and research staff we interviewed indicated that if IRS captured this information and made it available to them, it would help them identify those returns with errors or omissions that IRS should examine. Most of the nontranscribed line items that were corrected were from Schedule A (Cost of Goods Sold) and Schedule K (Partners’ Shares of Income, Credits, Deductions, etc., or Shareholders’ Shares of Income, Credit, Deductions, etc.) for both partnerships and S-Corps. IRS identified two of these line items, both from Schedule K and K-1, as important for improving the effectiveness of computer modeling and mentioned other nontranscribed lines, such as “Short Term Capital Loss,” from the entity return that would be useful. Increasing e-filing of K-1s provides benefits and challenges for IRS and taxpayers. The benefits for IRS are faster and more comprehensive information as well as cost reductions. The benefits to taxpayers are the receipt of acknowledgment notices, faster rejection notices that allow taxpayers to resolve problems faster, and more accurate information. Currently IRS’s main challenge is the lack of complete e-filing capacity, but IRS is scheduled to have this capacity by 2007. 
The main challenge for taxpayers is the cost of converting from paper to e-filing. However, limited data indicate that most K-1s are computer generated, which is a prerequisite for e-filing. Also, all of the software companies offering e-filing that disclosed their fees (about half of those we contacted) charge less than a dollar per K-1 or no additional charge. Congress has mandated that IRS increase e-filing to at least 80 percent of all tax and information returns by 2007. Both IRS and Congress are considering increasing mandatory e-filing of flow-through entity returns. Currently, IRS electronically receives about a quarter of the K-1s filed, although only partnerships with more than 100 partners are mandated to e-file. Increasing e-filing of K-1s would benefit IRS because of the following: E-filing K-1s provides IRS with faster and more complete information for use in compliance and research programs. A recent Treasury Inspector General for Tax Administration (TIGTA) report stated that the savings in processing time resulting from e-filing would significantly affect IRS’s attempt to reduce its lengthy corporate examination process. In addition, the TIGTA report stated that comprehensive electronic information would minimize the number of no-change audits by enabling IRS to better target resources to issues that have the greatest compliance risk. E-filing K-1s would save IRS millions of dollars a year because it would eliminate the processing and transcription costs of paper K-1s. According to IRS, the cost to process e-filed K-1s is minimal once the systems are in place, while processing and transcribing paper K-1s cost IRS $14.6 million in fiscal year 2001 and $13.1 million in fiscal year 2002. If IRS were able to reallocate these cost savings, it could, for example, pay the salaries of 284 additional field collection revenue officers.
While, as noted earlier, some of the processing and transcribing costs will be reduced because of bar coding and scanning, IRS regards bar coding as a lesser alternative to e-filing. In addition, bar coding results in incomplete information because only the transcribed lines are scanned into the computer systems, and the K-1s are the only part of the entity return that is bar coded. Also, there is limited availability of software that has bar-coding capacity; only four software companies provide bar-coding capability compared to 19 software companies that provide e-filing. The primary benefits for taxpayers of increasing e-filing are as follows: Taxpayers that e-file will receive electronic acceptance or rejection notices within 2 days of submitting tax returns. The tax form is electronically transmitted to the software company, and the software company then transmits the tax return to IRS. IRS sends the software company an electronic acceptance or rejection notice within 2 days, and the software company then sends the notice to the taxpayer. Taxpayers that file paper returns do not receive acceptance notices and thus do not have proof that the returns were filed on time in case the tax returns are lost. E-filing taxpayers also receive rapid rejection notices and are thus informed of problems much faster than paper-filing taxpayers, who may wait 6 months for IRS to process tax returns. The information on an e-filed tax return should be more accurate because of the lack of IRS transcription errors. More accurate information would reduce the potential for burdensome taxpayer notices resulting from transcription errors. To respond to IRS notices, taxpayers and preparers are required to collect, organize, and submit information to IRS to explain any discrepancies cited in the notices, which requires an investment of both the taxpayers’ time and money.
In recent reports, TIGTA noted that e-filing would eliminate transcription errors that result in erroneous and burdensome taxpayer notices. The primary challenge for IRS of mandating increased e-filing is to implement computer systems that can electronically process the complete set of tax documents that flow-through entities may file with K-1s. Although IRS currently has the capacity to electronically process K-1s that accompany flow-through entity returns, IRS is unable to electronically process all the forms that may accompany trust and partnership returns. This impedes e-filing of those flow-through returns and their accompanying K-1s because taxpayers that submit partnership and trust returns (which include three-fourths of K-1s) have to submit both paper and electronic documents—a disincentive for e-filing. For example, signature forms have to be sent in on paper. In contrast, IRS currently has complete e-filing capacity for the entire S-Corp return, so no forms have to be filed on paper. IRS is scheduled to have complete e-filing capacity for partnerships and trusts but has pushed the completion date for this effort from 2006 to 2007 due to limited resources. The main challenge to expanded use of e-filing for taxpayers is the cost of converting from paper filing to e-filing. In separate reviews of flow-through entity returns by IRS and GAO, the majority of the tax returns were found to be computer generated, prepared by a paid preparer, or both, which might make the conversion to e-filing easier. Based on our sample of agreed closed examination cases from the Audit Information Management System for partnerships and S-Corps with tax years ending in 2000 or 2001, paid preparers prepared at least 84 percent of the returns, and at least 90 percent of the returns were computer generated. Of the nonprojectable sample of 200 partnership and trust returns that IRS reviewed, a paid preparer prepared 169 returns, and 173 were computer generated.
Since the above-mentioned reviews of flow-through entity returns indicate that a paid preparer prepares the majority of returns that accompany K-1s by computer, the cost to convert to e-filing may be marginal or nonexistent. If a paid preparer is using software that has e-filing capacity, then taxpayers can simply choose to use this option, which may entail only a marginal cost increase. According to our survey of the software companies that offer e-filing of partnership, trust, and S-Corp returns, all of the companies that disclosed their fees (about half of those we contacted) either charge from $0.30 to $0.90 per e-filed K-1 or include the option to e-file in the price of the software. In order to e-file, flow-through entities have to buy the software and e-file the entire flow-through entity return, at costs that vary from $3.50 per return to over $15,000 for comprehensive support for partnership returns, corporate income tax returns, and affiliated forms. For partnerships, if the paid preparer does not use software with e-filing capacity but the data are formatted according to IRS’s specifications, the preparer can send the partnership return electronically to a software company that will then electronically transmit it to IRS. One software company stated that it would generally charge $0.40 per K-1 for this service. According to IRS officials, IRS is considering mandating increased e-filing of information and tax returns, including those of flow-through entities. In recent reports, TIGTA has recommended that IRS work with the Department of the Treasury to mandate increased e-filing of flow-through entity returns, either through current regulatory provisions or through legislative action. As a result, IRS is currently studying the possibility of increasing mandated e-filing of flow-through entities’ returns with accompanying K-1s under Internal Revenue Code (I.R.C.)
Section 6011 as part of an agencywide initiative to increase e-filing to meet a congressionally mandated goal of having at least 80 percent of all tax and information returns filed electronically by 2007. IRS’s study includes the cost for taxpayers to convert from paper filing to e-filing, the cost for IRS to initiate and administer increased mandated e-filing, the perspectives of the paid tax preparer and business communities, and how to implement increased mandated e-filing. In addition, according to IRS officials, IRS is also considering mandating e-filing for those returns for which it has complete e-filing capacity. Congress is also currently considering the Tax Administration Good Government Act of 2004, which would permit IRS to mandate increased e-filing of flow-through entity returns and accompanying forms, such as K-1s, in two new ways. First, the law would remove the present restrictions in I.R.C. Section 6011 that prevent IRS from mandating that individuals, estates, and trusts e-file. Since the law would remove the restriction on mandating e-filing by individuals, IRS would then be able to mandate e-filing by paid preparers that prepare individual tax returns. Second, the law would lower the threshold at which IRS could mandate e-filing of information and tax returns for any taxpayer to 5 returns; currently, the threshold is 250 returns. Thus, IRS could mandate e-filing by paid preparers who file 5 or more flow-through entity returns or individual tax returns. Although there are some costs to taxpayers to e-file and to IRS in processing e-filed flow-through entity returns and related K-1s, in general e-filed K-1s offer substantial advantages for both IRS and taxpayers.
We are not making a recommendation for further action to expand e-filing of flow-through entities’ returns, including K-1s, because IRS agreed to take steps to do so pursuant to a TIGTA recommendation and is currently studying the costs of increased e-filing to IRS and taxpayers. One step, upgrading its overall capability to accommodate an increase in e-filed flow-through entity returns, including K-1s, is under way. However, we are concerned that IRS’s estimated date for having this capacity has been pushed back to 2007 due to limited resources. The sooner this can be accomplished, the sooner IRS can reap the potential benefits of an increase in e-filed Schedule K-1s while moving closer to achieving the congressionally mandated goal of having 80 percent of all federal tax returns and information returns filed electronically by 2007. Regardless of whether e-filing is expanded, IRS is missing an opportunity to improve the accuracy of TINs associated with K-1s and thereby is undermining the benefits that can be realized from its document-matching program, efficient targeting of examination resources, and new research to identify noncompliance. Although IRS officials expressed concern about the possible burden on flow-through entities of dealing with TIN error notices and about IRS’s ability to absorb the costs of sending such notices given its resource constraints, IRS does not have information on the likely benefits and costs of sending TIN error notices to flow-through entities. Given the high concentration of TIN errors among a small portion of flow-through entities, even if costs are high compared to the benefits of sending notices to some flow-through entities, the situation may be much different for error-prone flow-through entities.
To improve the availability and usefulness of Schedule K-1 data to IRS for detecting noncompliance, we recommend that IRS conduct a pilot study to determine the benefits and costs of requiring flow-through entities to correct invalid TINs on K-1s as soon as it has been determined that the TINs cannot be “perfected” via IRS’s TIN validation program. We received written comments on a draft of this report from the Commissioner of Internal Revenue, which are reprinted in appendix II. The Commissioner agreed with our assessment of Schedule K-1 TIN accuracy and that a pilot project would be useful in identifying ways to improve TIN accuracy. He said that IRS plans to study a number of options to ensure that TINs included on Schedule K-1s are accurate, including our recommendation that IRS conduct a pilot study to determine the benefits and costs of obtaining corrected TINs from flow-through entities. The Commissioner said that IRS’s Flow-Through Compliance Committee recently initiated a project to study invalid TINs on Schedule K-1s to determine their potential compliance impact. In addition, he mentioned other initiatives, such as form redesign, outreach efforts, and scanning Schedule K-1s, to improve the overall effectiveness of flow-through compliance efforts. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to the Chairman and Ranking Minority Member, House Committee on Ways and Means, and the Chairman and Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means. We will also send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov.
If you have any questions concerning this report, please contact me at (202) 512-9110 or brostekm@gao.gov or Jonda Van Pelt at (415) 904-2186 or vanpeltj@gao.gov. Key contributors to this report were Ralph Block, Maya Chakko, Keira Dembowski, Elizabeth Fan, Robert McKay, and Samuel Scrutchins. Our objectives were to (1) evaluate the accuracy of K-1 data used by the Internal Revenue Service (IRS), specifically transcription errors and invalid taxpayer identification numbers (TIN); (2) determine whether any limitations in the availability or accuracy of K-1 data have affected IRS’s ability to identify noncompliance; and (3) describe the benefits and challenges of increasing electronic filing of K-1s. To evaluate the accuracy of K-1 data used by IRS, we requested, obtained, and analyzed data from IRS’s K-1 database for tax year 2002. We examined two versions of the database, both of which had been modified from the original K-1 database by IRS research analysts. One, called the K-1 “cleaned database,” has original K-1s removed when possible where duplicate or amended K-1s for the same taxpayer were subsequently submitted by the parent flow-through entity. The second, called the “money-cleaned database,” also has all amounts that were obvious transcription errors removed. Generally, these were amounts in excess of $900 million and that exceeded the total amount reported on the parent flow-through entity’s Schedule K for the particular line item. We used the “cleaned database” to identify one such transcription error. We analyzed the “money-cleaned” database to identify the number of K-1s that were filed with inaccurate TINs by type of flow-through entity (partnership, subchapter S corporation (S-Corp), or trust) and by type of submission (e-filed versus paper filed). 
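The screening rule used to build the "money-cleaned" database can be sketched as below. The $900 million threshold and the comparison against the parent entity's Schedule K total come from the description above; the function signature, and the use of absolute values to cover both gains and losses, are assumptions of this sketch.

```python
# Sketch of the "money-cleaned" screening rule: flag a transcribed K-1 amount
# as an obvious transcription error when it exceeds $900 million AND exceeds
# the total the parent flow-through entity reported for that line item on its
# Schedule K. Absolute values are an assumption to cover gains and losses.
THRESHOLD = 900_000_000


def obvious_transcription_error(k1_amount: float, schedule_k_total: float) -> bool:
    """Return True if the K-1 amount is implausibly large for its parent line."""
    return abs(k1_amount) > THRESHOLD and abs(k1_amount) > abs(schedule_k_total)
```

Note the limitation the report itself acknowledges: a rule like this only catches extreme outliers, so transcription errors that produce amounts within normal ranges pass through undetected.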
We subsequently analyzed this subset of K-1s to determine the number and total income of K-1s with invalid TINs that IRS (1) was able to perfect via its TIN Perfection Program and (2) could not perfect and thus remained invalid and, in effect, unusable for compliance or research purposes. Because specific data were unavailable from the K-1 database concerning transcription errors, we conferred with IRS analysts to identify the type of transcription errors found during K-1 product reviews they conducted from July through November 2003. To identify whether any limitations in the availability or accuracy of K-1 data have affected IRS’s ability to identify noncompliance, we obtained from IRS the line items the agency transcribes from the K-1 and related flow-through entity returns. When we calculated the percentage of line items transcribed from the entity return, we included the K-1 as part of the return. To count line items, we included all labeled lines and sub lines, but excluded certain fields, including calendar or tax year, name and address, supplemental information/attachments, signature and date, preparer’s signature and date, preparer’s self-employment, and preparer’s firm name and address. We also interviewed IRS examination and research staff, as well as outside research consultants from the MITRE Corporation, with whom IRS contracted to analyze flow-through entities. Specifically, we discussed how IRS currently uses K-1 data to select flow-through entity returns for examination, how IRS research staff and research consultants are using K-1 data to develop analytical tools to aid IRS in better targeting returns for examination, and how data limitations affect their ability to effectively use K-1 information. To determine the compliance issues IRS identified and the related line items that were adjusted, we reviewed a stratified probability sample of partnership and S-Corp tax returns. 
We selected these returns from the population of 253 partnership and 1,121 S-Corp agreed closed examination cases listed in the IRS Audit Information Management System 2002 Closed Case database with tax years ending in 2000 or 2001. We reviewed a sample of 107 returns of which 91 returns, consisting of 52 of the partnership and 39 of the S-Corp returns, could be analyzed. The remaining 16 returns could not be analyzed, generally because the examination workpapers were not available or the case adjustments were based on unusual circumstances, such as amended return submissions from taxpayers. We used this sample of 91 returns to estimate several characteristics of this population of all 1,374 agreed partnership and S-Corp cases. Because these estimates are based on a probability sample, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a one-sided 95 percent confidence interval. For example, paid preparers prepared an estimated 93 percent of the returns, and a one-sided 95 percent confidence interval for this estimate has a lower bound of 84 percent. Since the actual population value would be contained in this interval for 95 percent of the samples we could have drawn, we are 95 percent confident that the proportion of paid preparer returns in the study population exceeds 84 percent. Similarly, for the adjusted lines found in the file review, we are 95 percent confident that adjustments made to nontranscribed line items occur in at least 40 percent of the examinations. We subsequently discussed our file review findings with IRS research and examination staff to obtain their views regarding whether having additional K-1 data available, such as line items not currently transcribed, would increase their ability to identify returns with compliance issues. 
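Under a simple-random-sample normal approximation, a one-sided 95 percent lower confidence bound for a proportion can be computed as shown below. This is a simplification: GAO's actual bounds reflect the stratified design and its weights, so this sketch will not reproduce the report's exact figures (such as the 84 percent lower bound on the 93 percent paid-preparer estimate).

```python
import math

# Simplified one-sided 95 percent lower confidence bound for a proportion,
# using the normal approximation for a simple random sample. This ignores
# the stratified design used in the report, so it will not match GAO's
# design-based bounds exactly.
Z_ONE_SIDED_95 = 1.645  # standard normal quantile for a one-sided 95% bound


def lower_bound(p_hat: float, n: int) -> float:
    """Lower bound such that the true proportion exceeds it with ~95% confidence."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - Z_ONE_SIDED_95 * se
```

For the report's sample of 91 analyzable returns with a 93 percent point estimate, this simple-random-sample formula gives a bound near 89 percent; the report's wider bound of 84 percent reflects the additional variance introduced by the stratified design.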
To describe the benefits and challenges of increasing e-filing of K-1s, we discussed this issue with IRS officials and officials from seven organizations that represent the taxpayer community. We selected the organizations based on prior GAO knowledge and referrals from some of the organizations that we contacted. From IRS officials, we obtained estimates of the cost to transcribe K-1 information, to help identify the potential cost savings if K-1s were e-filed. We also discussed IRS’s current requirements for mandating e-filing of K-1s and IRS’s experience in enforcing these requirements, and obtained data on penalties levied for failure to e-file required K-1s. Finally, we discussed IRS’s current and future ability to electronically process an increase in the number of e-filed flow-through entity returns, including K-1s. We also contacted all 19 of the software companies that offer e-filing for flow-through entities and received e-mail responses from just over half of the companies. From officials with the software companies, we obtained their current fees for preparing and e-filing flow-through entity returns and K-1s. To determine how many flow-through entities filed on a calendar year basis, we used the 2001 Partnership and Corporation Statistics of Income (SOI) samples. The SOI partnership data we used included the entire sample, but the SOI corporation data we used were limited to the flow-through S-Corps. Because these are probability samples, the SOI estimates are subject to sampling error. We produced estimates from these samples using SOI’s sampling weights and methods that are appropriate for stratified probability samples. In this report we present these estimates as intervals, reporting the lower bound of one-sided 95 percent confidence intervals. 
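The design-based estimation described above can be sketched in a few lines. The strata counts and weights below are invented for illustration and do not reflect SOI's actual sample design; the point is only to show how sampling weights turn stratified sample counts into population estimates.

```python
# Each tuple: (calendar-year filers in the stratum's sample,
# stratum sample size, sampling weight). All values here are
# hypothetical; SOI's actual strata and weights differ.
strata = [
    (180, 200, 50.0),
    (400, 500, 10.0),
]

# Weighted (Horvitz-Thompson-style) estimates of population totals.
estimated_calendar_filers = sum(c * w for c, _, w in strata)
estimated_population = sum(n * w for _, n, w in strata)
calendar_share = estimated_calendar_filers / estimated_population
print(round(calendar_share, 3))  # prints 0.867
```

A full design-based analysis would also compute design-consistent standard errors to form the one-sided intervals the report presents.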
We did our work at IRS headquarters in Washington, D.C., as well as at the Ogden, Utah, Processing Center and the Oakland, California, Area Office from April 2003 through July 2004 in accordance with generally accepted government auditing standards. We assessed whether the information contained in the K-1 databases, the two SOI databases, and the Audit Information Management System (AIMS) database was sufficiently reliable for the purposes of this report. We ensured that the copies of all five databases we received from IRS were identical to the original databases based on record counts and analyses of control totals, comparison to published data, or both. In addition, we performed electronic tests on each database to search for missing data and obvious errors. For the original K-1 database, from which the two cleaned versions we used were derived, we assessed IRS’s procedures for processing and transcribing Schedule K-1 data. We also assessed other procedures and methodologies IRS research analysts used to remove duplicate records and obvious errors from transcribed monetary fields. We anticipated that the K-1 database would have some reliability issues because our engagement was designed in part to assess the sufficiency of the data transcription effort. While large monetary transcription errors were removed from the “money-cleaned” database, additional, undetectable transcription errors of amounts within normal ranges may remain. For the AIMS 2002 Closed Case database, we relied exclusively on variables that allowed us to identify agreed closed case partnerships and S-Corps, as this was the population of cases from which we drew our sample. We interviewed IRS personnel who manage the AIMS databases and found that groups in IRS conducting examinations are required to validate annually that completed examination cases are actually shown as having been closed. 
In addition, we collected data from original returns during our data collection effort and compared those data to data contained in the AIMS database. We found no indication that our sample contained ineligible cases. SOI samples are widely used for research purposes. We have documented for recent reports that IRS performs a number of quality control steps to verify the internal consistency and completeness of SOI sample data. The agency uses similar quality control procedures for all types of SOI samples. For example, the agency performs electronic tests to verify the relationships between values on the returns selected as part of the SOI samples and manually edits data items to correct for problems, such as inaccurate and missing items. Because we used the partnership and corporate samples only to determine the percentage of partnerships and S-Corps that were calendar year filers, we needed no more than four variables from each database for this analysis. We checked these variables for completeness and accuracy and found no missing or out-of-range values. On the basis of our data reliability reviews of the five IRS databases, we believe all five contain data that are sufficiently reliable for the purposes of this report. 
Over a trillion dollars in income was distributed in tax year 2002 by flow-through entities, such as partnerships, subchapter S corporations, and trusts, to their partners, shareholders, or beneficiaries, respectively. The Internal Revenue Service (IRS) estimates that from 6 to 15 percent of such income is unreported on individual tax returns. This income is reported both to IRS and to the recipients on a Schedule K-1 (K-1). IRS uses K-1 data in its document-matching program to identify noncompliance and for other purposes. GAO was asked to (1) assess the accuracy of K-1 data, specifically transcription errors and taxpayer identification numbers (TIN); (2) determine whether any limitations in the availability or accuracy of K-1 data have affected IRS's ability to identify noncompliance; and (3) identify the benefits and challenges of increasing e-filing of K-1s. The accuracy of paper-filed K-1 data is reduced by transcription errors; both paper and e-filed K-1s have inaccurate TINs. IRS estimates that transcription errors for tax year 2002 ranged from 5 to 9.5 percent and is taking steps to address such errors. Although e-filed K-1s do not require transcription, for tax year 2002, the percentages of invalid TINs for e-filed and paper-filed K-1s were comparable (7 and 6 percent, respectively). Due to potential burden on flow-through entities and resource constraints, IRS does not notify the entities of invalid TINs on K-1s for correction. If IRS did so, e-filing entities would likely have enough time to correct invalid TINs before IRS runs its document-matching program. Inaccurate or limited K-1 data have created problems for IRS researchers and examiners. IRS research staff indicated that inaccurate TINs adversely affected their analysis of flow-through entity networks. Further, because IRS captures limited data from flow-through entity returns, including the K-1, IRS staff lack data they consider helpful, such as "Other Income," to help identify tax shelters. 
In at least 40 percent of closed examination cases we sampled, IRS examiners found errors with return line items not entered into IRS's databases when returns are received. If these lines were available up front, researchers say they would be able to better identify returns with potential noncompliance. Increased e-filing of K-1s would provide benefits and challenges to both IRS and taxpayers. Benefits for IRS include faster, more complete information and millions in annual cost reductions. Benefits for taxpayers include fewer IRS contacts with them because IRS would have more accurate information in its systems. The primary challenge for IRS is its current inability to electronically process all flow-through entity returns and related forms, including the K-1. For taxpayers, the primary challenge is the cost of converting from paper to e-filing.
Veterans aged 65 or older are increasing both in number and in the percentage of the veteran population receiving VA health care services. More significantly, the number of veterans aged 75 and older, the heaviest users of nursing home care, is increasing rapidly. VA estimates that the number of veterans aged 75 and older will increase from about 2.6 million in 1995 to about 4.0 million in 2000. All veterans with a medical need for nursing home care are eligible to receive such care in VA nursing homes and community nursing homes under contract to VA. VA also pays a portion of the cost of care for veterans served in state veterans nursing homes. Because most veterans receive care financed through other government programs (Medicare or Medicaid), private insurance, or personal assets, however, these VA programs provide only a portion of the nursing home care that veterans receive. VA serves veterans essentially on a first-come, first-served basis up to the limits of VA’s budget authority for nursing home care. VA is authorized to pay for care in community nursing homes for a period generally not longer than 6 months for nonservice-connected veterans and for an indefinite period for veterans with service-connected conditions. No maximum service period exists, and only higher income, nonservice-connected veterans must contribute to the cost of their care in VA nursing homes. State veterans homes establish their own admissions policy, and, although they receive per diem payments from VA, state homes generally rely on patient cost sharing to help cover expenses. VA operates 129 VA nursing homes (in 45 states), contracts with 3,766 community nursing homes (in all 50 states, the District of Columbia, and Puerto Rico), and pays a portion of the costs for veterans served in 80 state veterans homes (in 38 states). 
Obligations for VA and state veterans nursing homes have increased each year from 1985 through 1995; obligations for community nursing homes have fluctuated over the same period. Overall, VA reports that nursing home obligations grew from about $710 million in 1985, serving 72,889 veterans, to $1.6 billion in 1995, serving 79,373 veterans, as shown in table 1. To control construction of VA nursing homes and encourage placement of veterans in less costly community nursing homes, the Office of Management and Budget (OMB) established guidelines in 1987 for both the market share (16 percent) of the estimated demand by veterans for nursing home care and the distribution of veterans among the various types of facilities. The patient distribution goal is 30 percent in VA nursing homes, 40 percent in community homes, and 30 percent in state veterans homes. Management of nursing home resources in VA is changing with the reorganization of the VA health care system. The reorganization involves trimming unnecessary management layers, consolidating medical services, and using more community resources. Called the Veterans Integrated Service Network (VISN), the reorganized VA health care system will be administered through 22 local network service areas, each encompassing the assessment, planning, and budgeting aspects of providing VA nursing home care in its service area. Implementation of the VISN will shift nursing home resource management decisions from individual VA medical centers to VISN directors. VA’s transition to the VISN was in its early stages at the time of our review. The distribution of veterans in the three types of nursing homes differs greatly from VA’s target of 30 percent in VA homes, 40 percent in community homes, and 30 percent in state homes. Figure 1 shows the distribution based on the average daily census during fiscal year 1995. 
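The comparison between OMB's 30/40/30 distribution goal and the actual patient distribution reduces to a simple share calculation. The census figures below are invented solely to illustrate the arithmetic; the actual fiscal year 1995 average daily census appears in figure 1.

```python
target_share = {"VA": 0.30, "community": 0.40, "state": 0.30}

# Hypothetical average daily census by facility type (not actual data).
average_daily_census = {"VA": 13000, "community": 9000, "state": 11000}

total = sum(average_daily_census.values())
actual_share = {k: v / total for k, v in average_daily_census.items()}
# Positive gap: facility type above its target share; negative: below.
gap = {k: round(actual_share[k] - target_share[k], 3) for k in target_share}
print(gap)
```

With these made-up numbers, VA homes would be about 9 points above target and community homes about 13 points below, mirroring the direction of the shift the report describes.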
Appendix I shows, by state, the number of nursing homes that VA uses and the average daily census by each type of facility in fiscal year 1995. The distribution pattern has also shifted considerably over the years. In fiscal year 1985, for example, about 40 percent of veterans (figure based on the average daily census) receiving VA-financed or -provided nursing home care were cared for in community nursing homes. Also, VA’s average daily census in community nursing homes was over 3,000 patients greater in 1985 than in 1995. Figure 2 shows the average daily census in the three types of nursing homes from fiscal years 1985 through 1995. The distribution shift was the result of major reductions in expenditures for community nursing homes that occurred in the late 1980s and early 1990s. For example, in 1989 VA delegated community nursing home budget decisions to VA medical centers. Some medical center directors left their programs intact, while others used community nursing home funds for other medical center activities. Most community nursing home programs shrank considerably. According to a VA official, in fiscal year 1990, VA reversed its decision to delegate budget authority to medical centers because VA medical centers did not support the community nursing home program. VA’s use of community nursing homes has not returned to pre-1989 levels, however. From fiscal years 1988 to 1993, the average number of community nursing homes under contract to each VA medical center decreased from 24 to 21, and the average number of veterans placed in community homes by each VA medical center decreased from 183 to 129. Community nursing home funds have also been used to meet VA budget emergencies and to fund VA-sponsored noninstitutional care programs. According to VA budget documents, in fiscal year 1992, $35 million of VA’s community nursing home program budget was reprogrammed to meet the increased costs of special pay rates for physicians and dentists. 
According to a VA official, the reprogramming of these funds was cleared by OMB and VA appropriations committees. Also, in fiscal year 1993, VA’s Homemaker and Day Care programs, alternatives to institutional care, began to share community nursing home budget resources. This action was also supported by the Congress through language in VA’s appropriations bill designed to increase VA’s use of long-term care alternatives. For fiscal year 1995, VA obligated $1.1 billion for VA nursing home operations, about $361 million for community nursing homes, and about $166 million for state veterans homes. According to VA-reported costs for fiscal year 1995, VA’s daily patient cost was $213.17 for veterans in VA nursing homes, $118.12 for veterans in community nursing homes, and $35.37 for veterans in state veterans homes (where only a portion of costs are funded by VA). Actual costs are unknown, however, because VA data systems neither reflect all costs nor capture all costs consistently enough to allow accurate cost comparisons. We have reported that VA’s cost accounting system distributes costs inconsistently and is generally not reliable as a source for precisely comparing VA program costs. We have also noted that VA cost reports are not subject to audit and rely on each medical center to determine the distribution of costs among different activities. VA budget and program officials we contacted recognize that cost reports do not provide useful, reliable cost information. Because decisions on staff allocation costs and, in some cases, workload are made at the facility level, data are inconsistent among facilities. For example, VA’s cost data do not include the cost of all services provided to community nursing home patients by VA medical centers, such as radiology and laboratory services, clinical visits, and medications. 
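For rough scale, the reported daily rates can be annualized. This is arithmetic illustration only; as the report notes, VA's cost data omit some services and are inconsistently allocated, so these annualized figures should not be read as true per-patient costs.

```python
# FY 1995 VA-reported daily patient costs, in dollars.
daily_cost = {"VA": 213.17, "community": 118.12, "state": 35.37}

# Annualized per-patient cost; illustrative only, given the known
# gaps and inconsistencies in the underlying cost data.
annual_cost = {k: round(v * 365, 2) for k, v in daily_cost.items()}
print(annual_cost)
```

On these reported figures, a year of VA nursing home care would be roughly $78,000 per patient versus roughly $43,000 in a community home, though the case mix and staffing differences discussed below account for part of that gap.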
In addition, VA’s costs for transporting veterans between community nursing homes and VA medical centers for treatment are generally excluded from VA’s cost data for community homes. The inconsistent distribution of costs among VA cost centers leads to both overallocation and underallocation of overhead and variable costs (such as laundry, linen, janitorial, and administrative services) to VA nursing home units. Information was not available to determine the overall effect of cost distribution inconsistencies on VA nursing home daily costs. Factors that contribute to cost differences between VA and community nursing homes include patient case mix differences and more intensive staffing patterns in VA homes than in community nursing homes. For example, VA’s Nursing Home Cost Study issued in August 1996 reported that among patients sampled as of October 1, 1995, about 16 percent of VA nursing home patients were in a heavy care category requiring special rehabilitation services (thus requiring more care and higher costs) compared with 3 percent of community nursing home patients. Conversely, more community nursing home patients were in a less resource-demanding category—22 percent for community nursing home patients compared with 17 percent for VA nursing home patients. The study also noted that VA nursing homes have an overall higher level of staffing than community nursing homes (0.69 patient care personnel per resident at VA facilities compared with 0.58 at community nursing homes) and that the facilities employ different types of staff. Registered nurses (RNs) make up 36 percent of the staff at VA nursing homes but only 12 percent at community nursing homes. Aides, on the other hand, make up 67 percent of the staff at community nursing homes but only 32 percent of the staff at VA facilities. In July 1995, VA began implementing a decision support system (DSS) at 38 hospitals. Current VA plans call for the deployment of DSS to all VA hospitals by fiscal year 1998. 
Such support systems in the private sector have proved to be an effective management tool for improving quality and cost-effectiveness, and VA expects DSS to do the same for its health care operations. DSS can compute the cost of services provided to each patient by combining patient-based information on services provided with financial information on the costs and revenue associated with those services. VA expects DSS to provide VA managers and health care providers with variance reports identifying areas for reducing costs and improving patient outcomes and clinical processes. In a September 1995 report on the implementation of DSS, we noted that VA had not developed a business strategy for effectively using DSS as a management tool. We also noted that VA had not yet developed business goals and associated plans to guide the organization, determine the proper location and use of resources, and provide a framework for using management tools such as DSS. VA is developing business plans that should be completed by December 1996. For example, one VISN work group charged with developing the network’s long-term care business plan was directed by the VISN leadership to consider consolidating, contracting for, or closing all VA nursing homes in the service area. These options are being considered so that VA can effectively provide nursing home resources in future years to the aging veteran patient population. VA’s use of community nursing home beds is affected by (1) a shortage of beds in some parts of the country, (2) veteran and family preferences to use VA nursing homes, and (3) VA’s inability to compete with other purchasers of community nursing home services in some locations because of lower reimbursement rates. VA has several initiatives under way to improve its access to community nursing home beds by improving the competitiveness of its rates but needs better information on specific locations where rate adjustments would be appropriate. 
On the other hand, VA’s use of state veterans nursing homes is limited because of the number of such beds available and because VA has little control over who gets admitted to these facilities. The availability of nursing home beds and occupancy rates are critical to VA’s ability to place veterans in community nursing homes. According to a 1996 study by the Institute of Medicine (IOM), Nursing Staff in Hospitals and Nursing Homes, the demand for nursing home services continues to grow as the number of aged and chronically ill people increases. IOM reported that in most areas of the country, the demand for nursing home services has surpassed the supply of beds, especially in relation to the growth in the oldest of the elderly population. In 1990, the United States had approximately 32 million people aged 65 years or older. This number is projected to double by 2030. The number of elderly needing nursing home care is expected to triple from about 1.8 million in 1990 to about 5.3 million in 2030. The median occupancy rate for U.S. nursing facilities was about 93 percent in 1994, the most current year for which data were available. As demand for nursing home resources grows, VA’s access to community nursing home beds varies by community. VA has identified seven geographic areas where it has problems securing community nursing home beds: California, the District of Columbia, Florida, New Hampshire, New York, South Carolina, and Virginia. In other parts of the country, though, VA does not appear to have such problems. According to our questionnaire respondents, for example, the availability of community nursing home beds in Oklahoma City, Oklahoma, and Kansas City, Missouri, exceeded the number of veterans needing beds. A VA planning official in Salt Lake City, Utah, also mentioned that this service area has always had a large number of community nursing home beds available. 
To make informed nursing home resource management decisions, VA needs reliable demand and capacity data. The VA Inspector General and we have criticized VA for consistently undercounting available community beds and not basing its nursing home construction or expansion projects on reliable data. VA has in fact overstated its nursing home construction needs. For example, we noted in August 1995 that VA’s planned conversion of the former Orlando Naval Hospital to a nursing home and the construction of a new hospital and nursing home in Brevard County were not the most prudent and economical uses of its resources. Furthermore, we noted that VA could purchase care from community nursing homes to meet veterans’ needs more conveniently and at a lower cost. The VA Inspector General noted in 1994 that regional planners had excluded suitable and available community nursing home beds and used questionable community data in needs assessments. Regional planners indicated that they lacked the staff resources to validate community resource data or reasonably establish that the data were reliable or accurate. It is not yet clear how the VISN structure will address the need to improve the reliability of data on available community nursing home beds. A May 1995 VA report, Evaluation of the Enhanced Prospective Payment System (EPPS) for VA Contract Nursing Homes, states that many longer stay patients were in a VA nursing home because they or their families refused their admission to a community facility. Veterans and their families were concerned about the limited VA benefit in community homes (6 months for nonservice-connected veterans) and the depletion of assets that occurs before a veteran’s community nursing home care is converted to Medicaid. Also, many veterans prefer to be housed with other veterans because community nursing homes lack the (mainly male-oriented) culture of VA or state veterans homes. 
VA studies suggest that VA’s reimbursement rates may be too low in some areas, and as a result veterans’ access to community nursing homes may be limited. VA has initiatives under way to enhance access to community nursing homes but needs better information to determine where reimbursement rates adversely affect veterans’ access to these homes. VA pays facilities a fixed daily rate for nursing home services. This rate is intended to cover all necessary services, both routine daily services (room, board, and nursing services) and special and ancillary services (primary and specialty physician services, diagnostic tests, and equipment). The rate is based on each state’s daily Medicaid rate for basic nursing home care, plus an additional 15 percent. VA medical centers may negotiate with community nursing homes to provide higher reimbursements for extra care cases (that is, costly special and ancillary services). Other payers, such as private insurers, Medicare, and Medicaid, generally do not reimburse community nursing homes on a daily-rate basis. The nursing home market generally reimburses on a unit-of-service basis. For example, the Medicaid program allows providers to bill for medical services, such as physician care and diagnostic tests, on a unit-of-service basis. In some communities, VA reimbursements are not competitive with other payers. Community nursing home administrators in the facilities we visited informed us that VA was not paying what was necessary to care for some veterans, particularly those patients with heavy care needs. For example, an administrator of a Salt Lake City home indicated that although VA’s contract rate is adequate for most patients, it is inadequate for patients on intravenous or feeding tubes. 
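The baseline rate rule described above, the state's daily Medicaid rate plus 15 percent, is straightforward to compute. A minimal sketch, using a hypothetical Medicaid rate (the function name and example rate are illustrative, not drawn from VA systems):

```python
def va_contract_daily_rate(state_medicaid_daily_rate):
    """VA's standard community nursing home daily rate: the state's
    daily Medicaid rate for basic nursing home care plus 15 percent.
    Medical centers may negotiate higher rates for extra care cases."""
    return round(state_medicaid_daily_rate * 1.15, 2)

# Hypothetical state Medicaid rate of $102.72 per day.
print(va_contract_daily_rate(102.72))  # prints 118.13
```

Because this is a fixed all-inclusive daily rate, homes absorb ancillary costs that Medicaid and Medicare would reimburse on a unit-of-service basis, which is why heavy-care veterans can be unprofitable at the standard rate.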
Another administrator in Richmond, Virginia, indicated that although the nursing home has the capacity to admit additional veterans, it would turn away veterans requiring heavy care involving high treatment costs because VA’s reimbursement is inadequate. Nursing home administrators and VA questionnaire respondents told us that veterans with behavioral problems, alcohol or drug dependencies, or conditions requiring the use of a ventilator were most likely to be refused admission to a community nursing home. Nursing home administrators said that they make trade-offs between serving veterans at potentially lower reimbursement rates and serving private pay, Medicare, and Medicaid patients whose ancillary service costs can be billed separately. The 1993 National Survey of VA Community Nursing Home Program Practices conducted by VA’s Midwest Center for Health Services and Policy Research noted that only 29 percent of VA medical centers indicated that the Medicaid plus 15 percent reimbursement rate was adequate to cover community nursing home costs in their area. VA policy allows medical centers to negotiate reimbursement rates higher than the standard Medicaid plus 15 percent rate, and VA’s May 1995 Evaluation of the Enhanced Prospective Payment System noted that 20 percent of community nursing homes were paid higher rates. Some VA medical centers do not pursue higher rates because negotiating contracts is burdensome and obtaining approval for such rates from the VA regional level sometimes takes 2 to 4 months. The study concluded that these increasingly difficult negotiations sometimes soured relations with community nursing homes. As a result of the study, VA changed its policy on community nursing home rate exceptions, allowing local VA medical center director approval, except for subacute care. In addition, VA is a small purchaser of nursing home care in most markets, providing little incentive for nursing homes to engage in lengthy negotiations. 
For example, in May 1995, VA’s Management Decision and Research Center noted that no veterans were placed in one-quarter to one-third of the community nursing homes with which VA had contracts during fiscal years 1988 through 1994. The remaining homes under contract during that period had between 4.6 and 6.3 veterans placed per year on average. VA is trying to improve the competitiveness of its nursing home reimbursement rates through three initiatives: (1) multistate contracting, (2) a prospective payment system based on Medicare nursing home reimbursement rates, and (3) revisions to the standard community nursing home contract format. However, VA needs better information to identify specific locations where adjustments to reimbursement rates are needed to enhance access to community nursing home beds. In September 1995, VA issued a request for proposals for multistate contracts to provide nursing home services. Multistate contracting is intended to enhance VA’s ability to access beds by easing the administrative requirements on community nursing homes and offering prospective providers a large volume of patients. VA plans to commit $34 million to these contracts or about 10 percent of the community nursing home program budget. The multistate contracts specifically guarantee access for veterans up to the amount specified in the contracts. VA awarded six multistate contracts to private corporations on September 1, 1996, and also contracted with a provider with 20 facilities in California. The new contracts will provide VA access to 1,101 nursing homes in 43 states, and VA believes the contracts offer administrative and other cost efficiencies. Each corporation will provide five levels of care based on a state-specific pricing structure designed to achieve cost savings over the life of the contract. Since 1991, VA has also been pilot testing a prospective payment system based on Medicare reimbursement rates. 
EPPS, implemented in 8 of VA’s 164 medical centers with nursing homes or contracts with community nursing homes, provides three levels of reimbursement—superskilled, skilled, and intermediate-level care. Ancillary costs are also included in these rates, but speech, physical, and occupational therapies are reimbursed separately using rates established by VA’s central office. A 1995 study by VA’s Management Decision and Research Center estimated that the pilot system reimbursed nursing homes $3,402 more per patient than VA’s normal reimbursement system. However, while data limitations made it inconclusive, the study suggested that these added costs were outweighed by savings to VA medical centers from moving patients from the hospital sooner to nursing homes, which provide a lower (and less expensive) level of care. VA will use the findings of the EPPS evaluation to collect information on nursing home market conditions and hospital utilization to determine whether special efforts are needed to become more competitive in community nursing home markets. VA medical centers may qualify to participate by meeting certain criteria based on cost, access, and administrative workload considerations. For example, medical centers will be allowed to participate if more than 50 percent of their community nursing home contracts require exceptions to the Medicaid plus 15 percent reimbursement rate. Other participation criteria include the inability to place more than 5 percent of patients who are considered appropriate for nursing home placement and a caseload that includes more than 50 percent of patients who need specialized care and require special negotiations before placement. In June 1995, VA changed its standard nursing home contract to provide for multiple reimbursement rates. 
These rates include the following categories of care: (1) reduced physical function, (2) basic, (3) heavy rehabilitation therapy, (4) special care, (5) clinically complex, (6) ventilator dependent, (7) human immunodeficiency virus/acquired immunodeficiency syndrome, and (8) Clinitron dependent. Rates are figured using the current Medicaid rate plus an amount to cover the use of additional supplies, services, and equipment associated with each category of care. Although these initiatives should improve VA’s access to community nursing home beds, VA needs reliable information on the availability of community nursing home beds and the reasons for access problems in specific locations to make informed decisions about where adjustments to reimbursement rates are warranted. Without information on the reasons for access problems in specific locations, assertions of noncompetitive VA reimbursement rates could obscure medical center preferences for using VA nursing homes. Some of the information available is anecdotal and based on testimonial rather than quantitative evidence. For example, a 1993 VA Inspector General report on the EPPS pilots noted that two sites reported that the pilot rates were too high for their area, though the pilot sites had been selected because they had reported difficulty accessing community nursing home beds. The Inspector General noted that the higher reimbursement rates did not ensure placement of “heavy care” (costly) VA patients in exchange for the higher costs associated with the pilots. VA’s access to state veterans homes is also limited. States establish admission policies, which vary from state to state. In some instances, admission criteria for state veterans homes are more restrictive than VA admission criteria. For example, some state homes require that veterans have service-connected disabilities or wartime military service. Other states allow admission of veterans’ spouses and other nonveterans. 
Bed availability in state homes also helps determine VA’s ability to use these facilities. For example, according to discussions we had with state nursing home admissions staff in fiscal year 1996, Massachusetts has a waiting list for skilled care bed admissions in its two state facilities; Colorado, however, has no waiting list for its four facilities and admits nonveterans to all four homes.

VA and we have found differences in the quality of care provided by the various types of nursing homes. Through its monitoring efforts, VA works with homes to improve patient care. On the basis of our review of selected quality indicators, the homes we visited appeared to provide comprehensive and appropriate care to veterans. VA homes, however, generally had fewer quality-of-care issues than most of the community and state homes we saw.

VA requires its medical centers to ensure that veterans receive quality care in any nursing facility in which they are placed. Specifically, on a monthly basis, VA medical centers must send an RN or social worker to visit veterans in community nursing homes to review their care and to provide a liaison between the community home and the VA medical center. An RN must visit patients at least every other month. Medical center staff also review state survey and certification data maintained for Medicare- and Medicaid-certified facilities. They also review Joint Commission on Accreditation of Healthcare Organizations (JCAHO) accreditations, when available, to assess community nursing homes’ compliance with appropriate standards. In addition, VA medical centers conduct annual on-site evaluations of community facilities using a multidisciplinary team to review patient records, policy, and procedures and to check fire and safety provisions. VA annually inspects state veterans homes to verify that they meet VA standards of care and thus remain eligible for per diem payments. 
VA inspections of state nursing homes are carried out by medical center staff and are similar to annual inspections of community nursing homes. If the medical center determines that care is inappropriate, it can suspend per diem payments for veterans placed in state homes.

We visited 2 VA, 10 community, and 5 state veterans nursing homes, where we reviewed the care provided to 95 veterans. The patients were randomly selected for a representative sample at the VA and state veterans homes. We reviewed the total veteran population under VA contracts at each community home but did not review the care provided to nonveteran patients in these facilities. We used the 1995 HCFA Provider Certification Survey Procedures to assess the quality of care and the overall ability of the facility to meet patient care needs.

Patient care in the two VA nursing homes we visited was comprehensive and generally met Medicare certification standards. One VA nursing home had achieved 100-percent participation in patient therapies. VA nursing homes are hospital based and therefore have greater access to rehabilitative and restorative services than other nursing homes. In addition, VA nursing homes were generally staffed by a higher number of RNs, gerontological and rehabilitation specialists, social workers, and physical therapists than community and state veterans homes, and all VA nursing homes are JCAHO accredited.

Although care met quality standards, we found some quality-of-care issues at all 10 community nursing homes we visited. For example, we noted that veterans in some community homes were less likely to receive ongoing restorative therapies than in VA facilities. We also noted that community nursing homes used physical and chemical restraints more often than VA homes we visited. 
One facility was not certified for Medicare or Medicaid, and one rural facility had only recently qualified for Medicare certification by ensuring that at least one RN staffed the facility 8 hours every day. None of the community facilities we visited was JCAHO accredited.

Although medical center staff did not always comply with monthly on-site monitoring requirements because of resource limitations, they generally visited community nursing home patients when problems were identified. At one location, medical center staff told us they were reluctant to criticize community nursing homes because bed availability was at a premium and they did not want to antagonize the homes. The staff at this medical center did perform monthly monitoring visits and sought to resolve patient care problems by educating facility staff and providing patient care consultations.

According to our survey respondents, VA medical centers terminated 50 contracts with community nursing homes in fiscal year 1994 because of quality-of-care problems. No placements were made in an additional 67 contract facilities because of quality-of-care concerns during the same time frame. VA’s study of EPPS also noted that some patients received insufficient medications and restorative or rehabilitative care in community nursing homes. The report cited one group of community nursing home providers who said distinct differences existed in the quality of care provided to private and Medicare patients compared with Medicaid and VA patients.

Care provided in the state homes we visited generally met quality standards. One state home, however, had several quality-of-care issues. The home was not certified for Medicare or Medicaid or accredited by JCAHO. This home showed little evidence of planned daily activity and did little to protect the privacy of patients, whose care was provided in open wards without privacy curtains. We also observed heavy use of physical and chemical restraints at this facility. 
Although the VA medical center knew of this facility’s long-standing problems, which its annual visits had repeatedly detected, the medical center’s infrequent attention to the facility’s quality of care was not sufficient to effect corrective measures.

The uneven distribution of nursing homes and differences in the extent to which VA reimbursement rates are competitive in local markets could reasonably lead to different responses among VA networks to meet the demand for cost-effective, high-quality nursing home services. However, without (1) accurate and complete information on nursing home costs, (2) better information on the availability of community nursing home beds, and (3) information on the competitiveness of VA reimbursement rates, VA has inadequate assurance that it is using the nursing home resources at its disposal to the best of its ability to serve veterans in need of such care. As VA implements the VISN structure, decisionmakers will need better cost- and care-based information on the nursing home services it provides or purchases. VA’s implementation of multistate contracts and efforts to improve the competitiveness of its reimbursement rates should improve its access to community nursing home beds. VA’s efforts to more accurately identify and report nursing home costs through DSS are incomplete. Also, VA needs better information on the availability of community nursing home beds and must identify locations where current rates are not competitive, especially in areas not covered by multistate contracts.

As part of VA’s ongoing efforts to improve nursing home resource management decisions, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to more accurately accumulate and report nursing home costs, assess the availability of community nursing home resources, and identify locations where current reimbursement rates are not competitive. 
On November 25, 1996, we met with the Assistant Chief Medical Director for Geriatrics and Extended Care and other VA officials to obtain their comments on a draft of this report. The VA officials stated that the report complements VA’s efforts to review options for providing long-term care for veterans and concurred with our findings and recommendations.

VA is currently rethinking its nursing home patient distribution goals for the three types of facilities and is also considering greater emphasis on alternatives to long-term care for veterans such as home and community health care, day care, and other noninstitutional care options. To this end, VA has established an Advisory Committee on the Future of VA Long-Term Care, which will make recommendations to VA’s Under Secretary for Health on the scope and structure of VHA’s long-term care services and the changes necessary to ensure that services for veterans are available and effective in future health care settings. Committee members will be selected on the basis of professional expertise in various components of long-term care and will represent constituencies such as veterans service organizations, nursing home corporations, and university-based academic communities. VA expects recommendations from the committee in 1-1/2 to 2 years.

VA agreed that its Cost Distribution Report is inadequate, and although better information on costs and local resources is available from data collected in conjunction with the new multistate contracts, VA still expects that full implementation of DSS will improve data on costs and patient outcomes. In addition, VISN networks have been provided a new population-based, long-term care planning model that is being used to develop network business plans. VA noted that it has initiated efforts to improve collection of data on community nursing home patients to compare their characteristics (including case mix) with VA nursing home patients. 
A Patient Assessment Instrument, now used for VA nursing home patients, will be applied to community nursing home patients and is currently used for patients referred to multistate contract facilities. VA also offered several technical comments and clarifications on our draft report that we incorporated into the final report as appropriate.

Copies of this report are being sent to the Secretary of Veterans Affairs, other congressional committees, and interested parties. Copies will be made available to others upon request. Please call me at (202) 512-7101 if you have any questions or need additional assistance. Other GAO contacts and staff acknowledgments to this report are listed in appendix II.

Gina Guarascio, Evaluator
Pursuant to a congressional request, GAO provided information on the Department of Veterans Affairs' (VA) nursing home programs, focusing on: (1) the distribution of veterans in VA, community, and state veterans nursing homes; (2) the costs to VA for VA, community, and state veterans nursing homes; (3) the factors affecting VA's use of community and state veterans nursing homes; and (4) whether VA, community, and state veterans homes provide comparable quality care. GAO found that: (1) the number of veterans receiving VA-financed or -provided nursing home care increased from 72,889 in 1985 to 79,373 in 1995, though the costs of these services increased from about $710 million to $1.6 billion in the same period; (2) among veterans currently receiving VA-financed or -provided nursing home care, 40 percent receive such care in VA nursing homes, 36 percent in state veterans nursing homes, and 24 percent in community nursing homes; (3) VA records for fiscal year 1995 indicate that VA's daily per patient cost was $213.17 for veterans in VA nursing homes, $118.12 for veterans in community nursing homes, and $35.37 for veterans in state veterans homes; (4) some of the cost differences are attributable to differing patient mix and staffing patterns among the facility types, but the precise cost differences cannot be determined because of weaknesses in VA's cost data; (5) several factors influence VA decisions on where to place nursing home patients; (6) VA's use of state veterans homes is limited by the number of such beds available and by some states' criteria for admitting veterans to these homes; (7) the VA nursing homes GAO visited appeared to provide more comprehensive care to veterans than most of the community and state veterans nursing homes GAO visited; (8) although the care provided in the community and state homes GAO visited generally met quality standards, GAO identified quality-of-care issues at both types of homes; and (9) although VA has initiated efforts to 
improve its data on the cost of providing and purchasing nursing home care, the availability of nursing home beds in local markets, and the adequacy of VA reimbursement rates to purchase quality nursing home care for veterans, better information is still needed for VA to make informed resource management decisions.
DOE has more than 50 major sites in 35 states where the department carries out its varied missions, including developing, maintaining, and securing the nation’s nuclear weapons capability, cleaning up the nuclear and hazardous wastes resulting from long-term weapons production, and conducting basic energy and scientific research and development. This work is overseen primarily by DOE’s largest program offices—the NNSA, the Office of Environmental Management, and the Office of Science—and is primarily carried out through facility management contracts. DOE has a workforce of 16,000 federal employees; the department relies on the more than 100,000 employees of its contractors to manage its facilities and achieve its missions.

DOE’s contracts with small businesses occur in several different ways. First, small businesses receive direct contracts from a portion of DOE’s procurement outlays that are not awarded as facility management contracts. Second, small businesses may compete for and receive facility management contracts. Historically, small businesses have not performed these contracts, though in a few cases small businesses have won such contracts after DOE identified ways to limit the contract’s scope of work. Third, small businesses receive subcontracts from DOE’s prime contractors. In 2004, approximately 17.5 percent of facility management contract dollars went to small businesses as subcontracts. Subcontracts, however, do not count toward achieving DOE’s small business prime contracting goal.

Advocacy responsibilities for small business contracting rest primarily with a small business office—usually called the Office of Small and Disadvantaged Business Utilization—at each executive-branch agency. 
In general, officials in these small business offices are responsible for negotiating an annual small business prime contracting goal with the Small Business Administration (SBA), establishing each agency’s small business policy and guidance, coordinating agencies’ small business outreach efforts, and monitoring small business performance with respect to the goal.

Within DOE, the department’s Office of Procurement and Assistance Management and NNSA’s Office of Acquisition and Supply Management also play important supporting roles in promoting small business prime contracting. These offices establish overall department procurement policy and prepare more specific guidance to reflect contracting requirements consistent with federal acquisition regulations. These procurement offices also maintain data on DOE’s prime contracts, including annual obligations to small businesses, and work with the Small Business Office staff to monitor small business performance and implement small business policies. DOE’s program offices are responsible for identifying small business prime contracting opportunities and providing contracting oversight. The SBA calculates DOE’s annual small business prime contracting achievement using data from the Federal Procurement Data System–Next Generation, a governmentwide procurement database that is administered by the General Services Administration.

DOE’s efforts to increase its prime contracts with small businesses have increased the department’s total expenditures on small business prime contracts since 2001. However, the increases were not sufficient to achieve the department’s small business prime contracting goal in 4 of the past 5 years. 
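The negotiated goal described above is a share-of-procurement test, so checking performance against it is simple arithmetic. The sketch below uses entirely hypothetical dollar figures to illustrate the calculation.

```python
def goal_shortfall(small_business_dollars, total_procurement_dollars, goal_pct):
    """Return the achieved percentage and the dollar shortfall against the goal.

    A negative shortfall means the goal was met or exceeded.
    """
    achieved_pct = 100 * small_business_dollars / total_procurement_dollars
    target_dollars = goal_pct / 100 * total_procurement_dollars
    return achieved_pct, target_dollars - small_business_dollars

# Hypothetical figures, in millions of dollars: $900M in small business
# prime contracts out of $20,000M total procurement, against a 5.5% goal
pct, gap = goal_shortfall(900, 20_000, 5.5)
print(f"achieved {pct:.1f}%, short by ${gap:.0f}M")  # achieved 4.5%, short by $200M
```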
DOE’s approach to increasing its prime contracting with small businesses focused on three main areas: (1) identifying more contracting opportunities for small businesses, (2) expanding small business development and outreach activities to create a larger pool of qualified small businesses, and (3) improving program management and oversight.

DOE’s effort to increase the opportunities for small businesses to win contracts with the department included restructuring or “breaking out” portions of projects historically conducted by the department’s facility management contractors and redirecting that work to small businesses, modifying procurement strategies to expand opportunities for small businesses, and continuing to emphasize the award of nonfacility management contracts to small businesses.

To redirect portions of projects traditionally performed under facility management contracts, DOE’s Office of Environmental Management identified nine such projects that it believed could be reserved for competition among small businesses as prime contracts. These prime contracts, summarized in table 1, are collectively worth about $745 million, and many involve multiyear environmental cleanup, construction, and facility operations activities that are essential to the mission of this office. As of September 2005, the Office of Environmental Management had successfully awarded seven of these contracts and obligated about $266 million. In December 2005, this office awarded an eighth small business contract, the second at Paducah, Kentucky. Also in December 2005, the Office of Environmental Management cancelled the ninth contract, which involved the Fast Flux Test Facility procurement at Hanford, Washington, due to budgetary constraints and the need to focus resources on more important projects. 
In comparison with the Office of Environmental Management, NNSA and the Office of Science identified relatively fewer procurements that could be redirected from their facility management laboratory contracts and set aside for small businesses. NNSA and Office of Science procurement officials said that redirecting work from facility management contracts to small businesses was not a priority in their programs because removing mission-related work from the contractors managing the laboratories could diminish the department’s ability to ensure the work at these laboratories is effectively accomplished. However, these officials were trying to identify mission-related work that could be moved from facility management contracts to small business contracts when a compelling “business case” could be made for doing so. For example, in 2004 NNSA awarded a contract to a small business for design, construction, and integration services for radiation sensors. These services were previously provided under the Sandia National Laboratories facility management contract. This award has an overall value of $80 million, of which $71.5 million had been obligated through the end of fiscal year 2005.

In addition, NNSA and the Office of Science recently collaborated with the Office of Environmental Management to develop a small business information technology contract to provide services at the three DOE facilities in Oak Ridge, Tennessee. DOE plans to award this approximately $130 million contract in fiscal year 2006. These information technology services are currently being performed by three large businesses as subcontractors to the facility management contractors at the Oak Ridge sites. The Office of Science is also considering breaking out select small business subcontracts from the Thomas Jefferson National Accelerator Facility contract and awarding these as prime contracts.

To further expand opportunities for small businesses, DOE also modified procurement strategies. 
Its two efforts in this area were led by the Office of Environmental Management and NNSA:

In 2004, the Office of Environmental Management established “indefinite delivery/indefinite quantity” contracts in which it preapproved 8 large and 14 small businesses to provide services on environmental cleanup and deactivation, demolition, and removal of facilities on an as-needed basis. These contracts authorize a maximum of $800 million in total contract task orders through 2009. Through fiscal year 2005, almost $24 million in task orders had been issued, all of which went to small businesses. Officials cited a variety of possible reasons for the limited use of this contract mechanism so far, including contract officers’ unfamiliarity with implementation procedures and security issues surrounding administration of the contracts.

An NNSA initiative—known as the “tri-lab initiative”—involved combining procurements common to the Sandia, Lawrence Livermore, and Los Alamos national laboratories, such as facility security, maintenance services, and computer hardware, and purchasing these goods and services from small business suppliers. According to NNSA, most of the targeted procurements were for support services, such as temporary staff, rather than direct mission laboratory work. The three laboratories initially estimated that as much as $187 million in procurements could be redirected to small businesses in fiscal year 2005, with a goal of $300 million by 2007. However, before NNSA could fully take advantage of any potential benefits, the initiative was canceled. 
According to NNSA officials, the cancellation was primarily due to congressional concerns regarding the potential impact on the department’s ability to ensure that the laboratories’ projects are effectively accomplished, as well as concerns that some small businesses could lose existing laboratory subcontracts if NNSA awarded the work as new prime contracts through the tri-lab initiative.

Finally, DOE’s efforts to expand contracting opportunities included continuing to emphasize the importance of directing nonfacility management contracts to small businesses. DOE procurement officials said that nonfacility management contracts issued by the department are reserved for small businesses whenever possible. Exceptions would involve situations in which only a large business supplier was available, such as for utilities. Between fiscal years 2001 and 2004, the percentage of these contracts awarded to small businesses increased from 48.7 percent (753 out of 1,546 contracts) to 53.0 percent (831 out of 1,569 contracts). In fiscal year 2004, small businesses received 61 percent ($282 million) of the total dollars spent on these new nonfacility management contracts.

DOE has attempted to increase its use of small businesses by increasing the pool of small businesses willing and able to provide goods and services to the department. This effort has involved a variety of business development and outreach activities, including establishing a small business advisory team with participation by small businesses, pairing large and small businesses as mentors and protégés to assist in developing small businesses’ capabilities, and periodically hosting small business conferences to discuss upcoming contracting opportunities and to help small businesses understand the intricacies of the federal procurement process. DOE contends that many of these activities will help to expand both prime contracts and subcontracts with small businesses. 
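The award-share figures cited above for new nonfacility management contracts can be reproduced with simple share arithmetic; the contract counts are taken from the report.

```python
def small_business_share(awarded_to_small, total_awarded):
    """Percent of new nonfacility management contracts awarded to small businesses."""
    return 100 * awarded_to_small / total_awarded

# Contract counts reported for fiscal years 2001 and 2004
fy2001 = small_business_share(753, 1_546)
fy2004 = small_business_share(831, 1_569)
print(f"{fy2001:.1f} {fy2004:.1f}")  # 48.7 53.0
```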
The extent to which these efforts contribute to DOE’s small business prime contracting achievements is generally unclear. For example, regarding the pairing of small and large businesses in DOE’s mentor-protégé program, since 2002 the number of business pairs has increased from 5 to 48, but only one participating small business has subsequently received a prime contract from DOE. Similarly, while attendance at DOE’s annual small business conference has increased since 2004, and the feedback DOE collects from participants is generally positive, DOE does not know the extent to which the conferences have led to small businesses receiving prime contracts.

In addition to these DOE-wide efforts managed by the Small Business Office, the Offices of Environmental Management and Science, as well as NNSA, also pursue small business development and outreach activities, including attending small business conferences, managing a mentor-protégé program, and providing other training support for small businesses. For example, NNSA conducts biweekly outreach to small businesses by inviting interested small businesses to meet with NNSA procurement officials to discuss their capabilities and learn of upcoming procurement opportunities. NNSA held 318 of these sessions between March 2002 and January 2006. Nine small businesses attending one or more of the sessions have obtained 25 DOE prime contracts worth about $66 million. Also, Sandia National Laboratories, which primarily conducts work for NNSA, runs its own mentor-protégé program that intends to build the long-term capacity of small businesses.

To strengthen DOE’s management and oversight of its small business program, the Small Business Office took the following main actions: In 2002, it began quarterly reviews of the prime contracting achievements of each program office. 
These reviews are intended to track the department’s progress toward meeting its overall small business goal and to communicate results to program office officials and the Secretary of Energy. In 2004, it attempted to improve coordination among the department’s small business, procurement, and program offices by establishing the Advance Planning Acquisition Team. The team includes officials from these DOE offices and intends to collectively review upcoming procurements, identify potential small business contracting opportunities, and exchange information on promising practices for improving DOE’s small business achievements. In 2004, it began to review all contracts over $3 million not reserved for small businesses to determine if small business opportunities had been adequately considered.

The department, however, does not collect data that indicate the extent to which these efforts have directly or indirectly affected DOE’s small business prime contracting achievement. Therefore, the impact of these efforts is unclear.

Despite DOE’s efforts to strengthen its small business program, the department has not achieved its small business prime contracting goal in 4 of the past 5 years. Table 2 shows that DOE’s procurement dollars awarded to small businesses as prime contracts increased steadily during the 5-year period, and the amount nearly doubled between fiscal years 2001 and 2005. However, except for fiscal year 2003, DOE failed to meet its annual small business prime contracting goal, falling short by more than $120 million each year. To some degree, DOE’s small business prime contracting achievements have been limited because they came primarily from procurements directed to nonfacility management contracts, which have been a declining portion of DOE’s overall procurement outlays. 
Between fiscal years 2001 and 2005, more than 98 percent of the approximately $4 billion DOE obligated to prime contracts with small businesses went to nonfacility management contracts. Not until fiscal year 2005 did DOE award facility management contracts to small businesses—$41.8 million for infrastructure and remediation activities at Portsmouth and $6.4 million for infrastructure activities at Paducah. At the time of our review, DOE had obligated $2.7 million to the Paducah remediation facility management contract, which was awarded in December 2005. During this time period, DOE’s facility management contracts were taking an ever-increasing share of total procurement dollars. In 2001, DOE obligated about $14.8 billion to its facility management contracts, which was about 80 percent of total procurements that year. In 2005, DOE obligated $19.8 billion to its facility management contracts, which was about 87 percent of the department’s total procurement outlays.

In analyzing DOE’s inability to meet its small business prime contracting goal, it is also worth noting that the program offices we reviewed—the Offices of Environmental Management and Science, and NNSA—accounted for slightly less than half of the approximately $948 million in small business prime contracting achievements in fiscal year 2005 despite having responsibility for almost 90 percent of DOE’s total procurement dollars. According to preliminary data from DOE’s Small Business Office, in 2005 the Office of Environmental Management directed about $198 million, or 2.9 percent of its procurement dollars, to small businesses; NNSA directed $237 million, or 2.6 percent; and the Office of Science directed about $30 million, or 1.1 percent. In contrast, more than half of DOE’s fiscal year 2005 contracting with small businesses originated from the DOE offices that are collectively responsible for only 10 percent of the department’s total procurement dollars. 
These offices include, for example, the Office of Fossil Energy and the Office of Energy Efficiency and Renewable Energy. These smaller program offices within DOE directed about $469 million, or 21 percent, of their procurement dollars to small business prime contracts, slightly more than the combined small business prime contracts at the Offices of Environmental Management and Science, and NNSA. However, DOE’s other program offices are responsible for substantially fewer multimillion-dollar facility management contracts than the Offices of Environmental Management and Science, and NNSA.

DOE faces two management challenges in further improving its small business prime contracting performance. First, although DOE negotiates an annual small business prime contracting goal, it has not identified concrete steps—referred to as program objectives—that are expected to contribute in a specific measurable way to achieving its goal. Second, it does not use performance information to evaluate and improve program performance. Both types of practices are commonly associated with high-performing organizations and are consistent with principles contained in the Government Performance and Results Act. More specific information on these model practices and DOE’s comparative practices can be found in table 3.

Although DOE negotiates a small business prime contracting goal each year—5.5 percent in fiscal year 2005—DOE has not specified the concrete steps the department will take to achieve its small business prime contracting goal or how progress toward that goal can be measured. In a literature review of the practices of high-performing organizations, we found that such organizations often define specific program objectives indicating how the organization intends to achieve its goals. These objectives are measurable and are focused on the specific results an organization wishes to achieve. 
Well-defined objectives help an organization gauge its progress in achieving its programmatic goal. While DOE’s small business strategic plan identifies a number of activities—which the department refers to as “objectives”—that the department believes will contribute toward achieving its goal, these objectives do not lay out what it specifically expects to accomplish from each activity or establish a way to measure if these activities are indeed advancing it toward its goal. Among the “objectives” the department has identified are maintaining a procurement forecast, conducting a small business breakout study of facility management contracts, and conducting a mentor-protégé program. DOE has not indicated how these actions are intended to contribute to achieving its small business prime contracting goal, and thus how the success of these actions should be measured. For example, for the mentor-protégé program, the department has not laid out the number or value of small business prime contracts it expects to result from the program.

In addition, the department has not established program objectives for those key departmental efforts holding the greatest promise to increase the department’s small business prime contracting. For example, DOE currently sets aside prime contracts for small businesses from two categories of procurements: nonfacility management contracts and facility management contracts. Yet, the department has not specified the extent to which each of these types of procurements should contribute toward achievement of DOE’s small business prime contracting goal or how the success of DOE’s efforts to set aside small business contracts in these areas might be assessed. For example, 61 percent of obligations to new nonfacility management contracts went to small businesses in 2004. However, only 28 percent of the total obligations for nonfacility management contracts went to small businesses. 
It is unclear whether DOE has been successful in reserving such contracts for small businesses and whether the department’s efforts should be further improved. Defining an appropriate objective regarding the small business awards for nonfacility management contracts could increase DOE’s understanding of its efforts in this area and provide a much clearer means of assessing its progress. Similarly, the department has not established an objective for the extent to which DOE might create small business opportunities from facility management contracts. As discussed earlier, the proportion of procurements associated with facility management contracts has been increasing. The department has begun to recognize that, to further increase its small business prime contracting performance over the long term, it may be necessary to increase the small business opportunities redirected from facility management contracts. Furthermore, Congress recently required DOE to study the feasibility of changes to facility management contracts so that additional small business prime contracting opportunities might be provided. To date, the Office of Environmental Management has led the department’s effort in redirecting such contracts to small businesses by setting aside a number of small business opportunities culled from facility management contracts. Other program offices have done so to a lesser degree or have plans to do so in the future but have not yet awarded those contracts. As DOE continues to consider what small business opportunities it is reasonably able to draw from facility management contracts, establishing a program objective would encourage the department to maximize these opportunities. DOE does not systematically analyze the performance of its small business program to determine the effectiveness of specific activities or policies in advancing its goal, although the department does collect some performance information.
DOE’s main effort to collect and analyze performance information consists of tracking the proportion of total procurement dollars going to small businesses as prime and subcontracts at both the agencywide level and the program office level. The prime contracting data are tracked quarterly. Additionally, the Small Business Office collects information on a few of its small business development and support activities, such as attendance at and participants’ views on the quality of small business conferences. Also, DOE’s procurement office collects and tracks aggregate data on the extent to which facility management and nonfacility management contracts have been awarded to small businesses and whether such contracts involve multiyear agreements. DOE does not, however, collect sufficient information to provide the department with insight on whether or how specific policies or processes should be changed to further increase small business prime contracting. For example, as noted earlier, DOE’s policy is to set aside for small businesses all new nonfacility management contracts to the extent possible; in fiscal year 2004, about half of all new nonfacility management contracts were awarded to small businesses and half were not. However, the department does not require procurement staff to report the reasons that contracts have not been awarded to small businesses. The Federal Procurement Data System allows agencies to report several reasons for nonawards that include the following: no known small business source; a small business was solicited but did not make an offer; and a known small business source existed, but no offer was solicited. Such information might help DOE assess if staff are having difficulty identifying small businesses capable of performing DOE contracts, if small businesses are having problems preparing offers, or if time constraints are limiting staff efforts to solicit offers from small businesses, for example. 
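The kind of analysis the nonaward reason codes would support can be sketched in a few lines; the records and field names below are hypothetical illustrations, not actual FPDS data:

```python
from collections import Counter

# Hypothetical contract actions tagged with the three nonaward reason
# categories described above (records and field names are illustrative only).
nonawards = [
    {"contract": "A-001", "reason": "no known small business source"},
    {"contract": "A-002", "reason": "small business solicited, no offer made"},
    {"contract": "A-003", "reason": "known source, but no offer solicited"},
    {"contract": "A-004", "reason": "no known small business source"},
]

# Tally how often each reason occurs, so managers can see whether the main
# obstacle is finding capable firms, eliciting offers, or soliciting offers.
reason_counts = Counter(record["reason"] for record in nonawards)
for reason, count in reason_counts.most_common():
    print(f"{reason}: {count}")
```

A tally like this, aggregated across a fiscal year, would point to which of the three obstacles dominates and therefore where a policy change would have the most effect.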
A senior procurement analyst with DOE’s Office of Procurement and Assistance Management said the department does not collect this type of information because it has not viewed it as useful. However, high-performing organizations regularly collect and analyze performance data to develop information on the effectiveness of their efforts—determining whether or not these efforts are achieving the desired results. In addition, many high-performing organizations also conduct periodic program reviews or audits to identify systemic problems and determine what adjustments to policy or practice should be made to improve performance over the long run. While the department does conduct some limited assessments of the small business efforts carried out by the individual program offices, these assessments do not routinely result in documented plans for improved long-term performance. For example, DOE’s procurement office requires program offices to periodically review compliance with overall federal acquisition regulations, including specific requirements regarding small business prime contracting. But according to the manager of this effort, these limited compliance reviews have not identified problems with any program office’s small business practices that have required corrective action. In addition, DOE’s Small Business Office conducts an informal assessment of a program office’s efforts to meet its small business prime contracting goal when that office does not achieve its goal. However, the Small Business Office officials said they tend to focus on near-term obstacles that the program office faces in awarding small business contracts rather than on systematically evaluating the program office’s management of its small business prime contracting efforts. Problems identified often include such things as bid protests and delayed appropriations that impede the award of anticipated small business contracts. 
Similarly, any planned corrective action tends to focus on a program office’s ability to identify additional contracting opportunities for small businesses in the near term, rather than on programmatic changes that might improve performance over the long term. Furthermore, because the assessment is informal, it is not documented, leaving the Small Business Office without a written plan that can be used to hold a program office accountable for achieving results. Thus, when the department’s three largest program offices appeared unable to meet their small business prime contracting goals in fiscal year 2005—leaving DOE short of its 5.5 percent small business goal—the department had little information to fully account for the shortfall or systematically determine a course of possible corrective action to make program improvements. The small business management practices employed by federal agencies having a mission or agency component with a mission similar to DOE’s provide examples of strategies for small business program evaluation and continuous improvement. Unlike DOE, each of these agencies is required to use performance information gained from periodic program reviews to help identify and direct program improvement efforts. We did not assess the specific small business practices employed by these agencies to determine if they were effective or the impact these practices may have had upon their small business prime contracting. However, over time, these agencies and the components we visited have awarded a greater share of their procurement dollars to small businesses and more often met their prime contracting goals than has DOE. In general, the small business programs at the three agencies we visited carry out activities similar to those conducted by DOE.
As at all federal agencies, these agencies’ procurements are subject to the requirements of the Small Business Act and the Federal Acquisition Regulation, which together require small business contracting procedures and set-asides meant to ensure small businesses are afforded maximum practicable opportunities for federal prime contracts. As such, each small business office has a role in articulating and establishing agency small business contracting policy, as well as conducting small business outreach and development activities. As at DOE, these agencies, their components that we visited, or both, also promote the use of small businesses among agency contracting and procurement staff, train agency staff on federal and agency small business contracting requirements, maintain databases of potential small business vendors, and track and report small business accomplishments, periodically reporting results to senior agency officials. In contrast to DOE, however, NASA, the Department of the Army, and the Department of Health and Human Services use formal program evaluation to guide programmatic changes. In conjunction with the Office of Procurement, the NASA small business office performs a comprehensive review of NASA’s procurement offices on a rotating 3-year basis. The purpose of the review is to ensure that the small business program is being appropriately implemented and to bring to management’s attention issues that hinder progress toward agency goals. This review also examines how well the small business goals are being met, what the major impediments to small business utilization are, and what steps can be taken to improve small business usage within the procurement center. A recent review of NASA’s Kennedy Space Center found that small business specialists occasionally found it difficult to balance their procurement and small business work duties and recommended some organizational changes that would help staff surmount such difficulties.
Another recent review of the Goddard Space Flight Center identified a strained relationship between the SBA’s procurement center representative, who assesses NASA procurements for small business opportunities, and procurement center staff. The review recommended mediation from the Small Business Administration regional office to improve that relationship. In addition, the Goddard review included a recommendation to further reinforce the importance of small businesses to NASA by having newly hired staff and some others participate in an internal training course sponsored by the NASA small business office. According to NASA small business officials, this course is intended to show that the use of innovative and technically competent small businesses makes business sense for NASA, regardless of the particular small business contracting goal, and can contribute substantially to the agency’s ability to achieve its scientific mission. The Director of NASA’s small business office said that one of the main functions of his office staff is ensuring that program and procurement officials throughout the agency understand the business and mission benefits of contracting with small businesses. Similarly, the Department of the Army is subject to an annual small business program review by the Department of Defense (DOD) small business office, in which the Army is evaluated on a number of qualitative and quantitative factors, such as whether the organization has been able to meet its internal prime contracting goal, whether its performance has improved over the prior year, and the quality of the organization’s small business improvement plan. If DOD officials are unsatisfied with the performance rating earned by the Army, the head of the organization may be required to implement a performance improvement plan, which must be reviewed with senior department officials in the Office of the Under Secretary for Acquisition, Technology, and Logistics.
Army officials said they believe that elevating these concerns to a higher department level helps ensure problems are corrected. In addition, the Army is required to outline a minimum of three program improvement initiatives it intends to pursue during the year. These initiatives must include implementation milestones and criteria for assessing whether the initiatives have been successful. For example, in 2004, the Army identified an initiative to establish the policies and procedures of the Army-managed mentor-protégé program. This initiative ultimately led to a number of policy changes, an independent Army mentor-protégé program, and a workshop to help small businesses understand how to develop a formal mentor-protégé agreement with a DOD prime contractor. Although the Army requires its subordinate organizations to report small business prime contracting progress annually, which is substantially less often than such performance is tracked at DOE, the Army evaluates the small business program performance of each subordinate Army organization, including the U.S. Army Corps of Engineers, by conducting comprehensive performance audits at least once every 3 years. These comprehensive audits of Army small business programs are intended to generate a program improvement plan to address any deficiencies identified during the audit. Deficiencies that require corrective action must be reported to the Secretary of the Army. Army officials said they believe that yearly performance reviews and a periodic comprehensive evaluation together help ensure that needed programmatic changes are implemented. The Department of Health and Human Services also employs program evaluations with the aim of producing small business program improvements. 
Health and Human Services is in the initial stage of using the agencywide balanced scorecard, an evaluation framework the agency uses to assess its performance in a number of diverse areas, to periodically evaluate its small business program. Part of this effort includes surveying a wide range of small business program stakeholders—agency employees, customers, and vendors—to determine both the quality and quantity of services provided not only by Health and Human Services’ small business office, but also by the small business specialists assigned to operational divisions, such as the Centers for Disease Control and Prevention. To determine the effectiveness of the agency’s small business program, these surveys tailor questions to the different stakeholders. For instance, employees of the operational divisions are asked how well small business specialists are supporting their procurement efforts, whether they understand small business contracting requirements, and how much the small business specialists are involved in advanced acquisition planning, among other things. The agency also surveys potential small business contractors on the quality of support they receive from small business specialists, in particular whether staff has sufficiently explained how to do business with Health and Human Services, whether they have been adequately informed of contracting opportunities, and whether they have received help resolving problems with solicitation issues. Health and Human Services intends to aggregate results and provide a descriptive statistical analysis, using the data to drive the small business program’s continuous improvement efforts. For example, the results are intended to help determine if additional program oversight, such as program audits or on-site monitoring, is necessary. Health and Human Services intends for this evaluation to be conducted once every 2 years.
Each operational division, including the Centers for Disease Control, must subsequently prepare a summary report that identifies needed changes, a plan to address any performance areas requiring improvement, and target dates for completing improvements. This report must be submitted to Health and Human Services’ management and appropriate agency stakeholders. DOE has made progress since 2001 not only in increasing the total dollars it awards to small businesses, but also in increasing the share of its procurement dollars awarded to such businesses. Nonetheless, it has been able to achieve its annual small business prime contracting goal just once in the last 5 years. DOE’s performance as well as its future potential in this area is clearly constrained by the department’s traditional reliance on a limited group of large firms and universities to manage high-cost projects in which public safety and national security are important concerns. Moreover, the share of the department’s procurement dollars going to such projects is increasing, whereas the share going to projects more commonly performed by small businesses is declining. These circumstances create hurdles for DOE that other federal agencies do not face. In spite of these circumstances, DOE has made substantial efforts to improve its small business prime contracting performance. However, the department has not accompanied these efforts with a clear understanding of how they affect prime contracting performance, which efforts are working well and which are not, and what changes might be made to improve the effectiveness of these efforts. 
If DOE can combine its small business improvement efforts with a clear strategy for achieving its annual goal, with performance information that indicates its efforts are effective, and with program evaluations that help to identify problems and lead the department to address them, DOE can more credibly demonstrate that, even if it continues to fall short of its prime contracting goal, it has done all it possibly can to give small businesses the maximum practicable opportunity to contract with the department. To improve DOE’s management of its small business prime contracting program and to help ensure that small businesses receive the maximum practicable opportunity for DOE prime contracts, we recommend the Secretary of Energy direct the Office of Small and Disadvantaged Business Utilization, the Office of Procurement and Assistance Management, and the NNSA Office of Acquisition and Supply Management to jointly establish a systematic, organized, and disciplined approach to achieving the department’s small business goal. Such an approach should include the following three steps: Define small business program objectives that collectively identify the steps or approach DOE intends to take to reach its annual prime contracting goal. These objectives should focus on the specific results the department intends to achieve, should clearly contribute directly to DOE’s prime contracting performance, and should be measurable so that progress can be determined. Identify, collect, and analyze performance information that will allow the department to determine whether the small business program activities it carries out are achieving the desired results. Periodically conduct a comprehensive evaluation of the department’s and program offices’ small business programs to determine if changes are needed and use these assessments to guide improvement efforts. We provided a draft of this report to the Department of Energy for review and comment.
In written comments, the Director of the Office of Small and Disadvantaged Business Utilization stated that DOE concurred with the findings and recommendations and was taking steps to further improve its small business efforts. For example, DOE stated it would take further steps to identify the approach it will take to reach its annual prime contracting goal, to better assess the effectiveness of its existing small business efforts and identify areas of improvement, and to conduct periodic evaluations of the department’s small business programs. However, DOE expressed concern that we did not fully appreciate the department’s management and operating contract business model, especially in making comparisons to the small business programs at other federal agencies. The report does recognize that the use of large facility management contractors to perform much of DOE’s work has constrained the department’s ability to contract with small businesses. The report also recognizes that other federal agencies do not face a similar constraint. We believe the comparisons we made between DOE and other agencies are appropriate because we compared key management practices of each agency’s small business program, which are not dependent on the particular business model used to accomplish the agency’s mission. Furthermore, in selecting the three federal agencies to contrast with DOE’s small business program, our intent was to provide information on specific small business practices that differed somewhat from DOE’s own practices and that might provide examples for DOE as it continues its improvement efforts. DOE also provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Energy and the Administrator of the NNSA. We will also make copies available to others upon request. This report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix III. Our objectives were to (1) provide an update on the Department of Energy’s (DOE) key efforts to increase small business prime contracting opportunities and the results of these efforts to date and (2) identify the management challenges DOE faces in improving its small business prime contracting performance. In addition to these objectives, we are providing information on the management of small business programs by other federal agencies that either share certain characteristics with DOE’s largest program offices or that have components that share certain characteristics with these offices. To conduct our work, we interviewed DOE headquarters and program office officials, as well as representatives from the Small Business Administration (SBA), and collected and analyzed data from federal and DOE procurement databases. We also interviewed officials from the small business offices at the Department of Defense, the Department of the Army, the Department of Health and Human Services, and the National Aeronautics and Space Administration (NASA). To gain an understanding of DOE’s key small business management efforts and results to date, we obtained and reviewed DOE policy guidance and management directives concerning small business prime contracting, including internal memoranda, small business and procurement guidance, agencywide and program-specific small business plans, budget documents, and other related documents.
To further gain an understanding of the DOE Small Business Office and Office of Procurement and Assistance Management roles and responsibilities with regard to small business prime contracting, we reviewed the Federal Acquisition Regulation, as well as the DOE-specific supplement to these regulations. We also interviewed DOE officials in the Small Business Office and small business officials from the department’s program offices, as well as officials in the Office of Procurement and Assistance Management and their counterparts at the National Nuclear Security Administration (NNSA). We also interviewed officials at SBA to learn about federal small business policy, how small business goals are established with the federal agencies, how they ensure the federal government meets these goals, and what role they have in overseeing small business activities nationwide. To determine DOE’s small business prime contracting achievement for fiscal years 2001 through 2004, we reviewed SBA’s federal goals report. To determine DOE’s small business prime contracting achievement for fiscal year 2005, and to determine expenditures to date toward specific contracts, we relied on the Federal Procurement Data System—Next Generation (FPDS-NG), the federal government’s repository of information regarding the nature and value of federal procurement actions. This database contains detailed information on government contract actions over $25,000 and summary data for procurements of less than $25,000. We have previously issued reports critical of the reliability, accuracy, and completeness of FPDS-NG data and the data of its predecessor system and remain concerned about some aspects of the data system. However, based on the following measures, we determined that the data of interest were sufficiently reliable for the purposes of this report. 
We interviewed DOE officials and officials from the General Services Administration, the agency responsible for the FPDS-NG system, to determine the steps taken to ensure accuracy and completeness of procurement data in FPDS-NG. In addition to FPDS-NG, we also used data taken from a DOE-specific procurement database, called the Procurement and Assistance Data System and maintained by DOE’s Office of Procurement and Assistance Management, to determine fiscal year 2005 prime contracting achievements for DOE’s largest program offices: the Office of Environmental Management, the Office of Science, and NNSA. As appropriate, we converted dollar values to constant 2005 dollars using Gross Domestic Product price indices from the Bureau of Economic Analysis. To identify the main challenges DOE faces in improving its small business prime contracting performance, we reviewed the management practices of DOE’s small business program and compared these against established management principles identified in select literature from leading organizations on effective management practices at federal agencies. For example, we reviewed guidance published by the Governmental Accounting Standards Board, the National Academy of Public Administration, and the IBM Center for the Business of Government, regarding effective management practices. We also reviewed prior GAO reports concerning managing for results, as well as the Government Performance and Results Act of 1993. We also reviewed program guidance, management directives, including DOE internal guidance entitled Managing Critical Management Improvement Initiatives, and program performance plans to determine the current practices supporting the DOE small business program. To gain a further understanding of DOE’s small business management efforts, we also interviewed DOE Small Business Office officials and staff, and DOE Procurement Office officials. 
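The constant-2005-dollar conversion mentioned above can be sketched as follows; the price index values are illustrative placeholders, not actual Bureau of Economic Analysis figures:

```python
# Restate nominal dollars in constant 2005 dollars using a GDP price index.
# Index values below are illustrative placeholders, not actual BEA data.
GDP_PRICE_INDEX = {2001: 88.0, 2002: 89.5, 2003: 91.2, 2004: 93.8, 2005: 96.8}

def to_constant_2005_dollars(nominal_dollars: float, year: int) -> float:
    """Deflate an amount awarded in `year` into constant 2005 dollars."""
    return nominal_dollars * GDP_PRICE_INDEX[2005] / GDP_PRICE_INDEX[year]

# Example: $100 million awarded in 2001, restated in 2005 dollars.
restated = to_constant_2005_dollars(100.0, 2001)
print(f"${restated:.1f} million in 2005 dollars")
```

Restating all years in a common base year this way is what allows year-to-year comparisons of procurement dollars without inflation distorting the trend.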
To provide information on how other federal agencies address small business program management challenges, we obtained information on the practices of the small business offices of three other agencies that either share certain characteristics with DOE or have component organizations that share characteristics with DOE: the National Aeronautics and Space Administration, the Department of the Army (U.S. Army Corps of Engineers), and the Department of Health and Human Services (Centers for Disease Control and Prevention). We selected these agencies because each has annual procurement activity on a scale as large as or larger than the three DOE offices examined in this study and each agency has been able to award a larger share of its procurement dollars to small business than has DOE. Additionally, like DOE, these agencies must consider public safety and national security concerns in making procurement decisions. See table 4 for a comparison of each agency’s 2004 total procurements and small business prime contracting achievement information. Finally, these agencies were chosen to reflect the complexity of the varied missions at DOE. Both the U.S. Army Corps of Engineers and DOE’s Office of Environmental Management have environmental cleanup as a key component of their missions; scientific research conducted by the Centers for Disease Control and Prevention is similar in scope and complexity to that conducted by DOE’s Office of Science; and both the NNSA and NASA have highly technical and complex missions. To provide information on the small business program management and oversight at these agencies, we obtained and reviewed documentation of agency-specific procurement regulations, small business and procurement policy guidance, management memoranda, small business strategic plans, budget documents, and other related documentation. In addition, we interviewed cognizant officials at each agency concerning this information.
We did not assess the specific small business practices of these agencies to determine if they were effective. Instead, we obtained information on small business program activities to identify practices that differed somewhat from DOE’s and that could serve as examples DOE might want to consider as it further seeks to improve its prime contracting performance. Although the examples we highlight in this report are consistent with established management principles, we did not determine if these practices, as implemented by the federal agencies we visited, have had a direct impact on their small business prime contracting. The Department of Defense maintains a small business office for each major component of the agency, and the operations of each are overseen by the Department of Defense’s small business office. Therefore, we interviewed small business officials in the small business offices at the Department of Defense, the Department of the Army, and the U.S. Army Corps of Engineers. The Department of Health and Human Services, in contrast, has a central agencywide small business office that is supported by a small business specialist in each of its 11 operating divisions, such as the Centers for Disease Control and Prevention. Therefore, we interviewed small business officials from the department’s small business office, as well as the small business specialist and procurement officials at the Centers for Disease Control. NASA’s small business office is similarly centralized, so we conducted interviews with officials from this office only. We conducted our work between February 2005 and March 2006 in accordance with generally accepted government auditing standards. In addition to the individual named above, William R. Swick, Assistant Director; Doreen Feldman; Kevin Jackson; Carolyn Kirby; Michael L. Krafve; Harry Medina; Dominic Nadarski; Cynthia Norris; John W. Stambaugh; Stan Stenersen; and Virginia Vanderlinde made key contributions to this report.
Federal policy requires that small businesses receive the maximum practicable opportunity for providing goods and services to federal agencies through prime contracts--direct contracts between the government and a contractor. The Department of Energy (DOE) buys more than $20 billion in goods and services annually. GAO was asked to (1) discuss DOE's key efforts to increase small business prime contracting opportunities and (2) identify the management challenges DOE faces in improving its small business prime contracting performance. In addition to these objectives GAO is providing information on the management of small business programs by other federal agencies that either share certain characteristics with DOE's largest program offices or that have components that share certain characteristics with these offices. Key DOE efforts to increase small business prime contracting have included identifying more contracting opportunities for small businesses, expanding small business development and outreach activities, and increasing program management and oversight. The department has had some success in redirecting to small businesses portions of contracts to manage large DOE facilities, as well as in securing additional small business prime contracting opportunities from the department's other contracts. As a result, the total dollars awarded annually as prime contracts to small businesses have increased, and the share of procurement dollars awarded to small business in 2005 was DOE's second highest ever. Despite these gains, however, DOE was unable to meet its small business prime contracting goal in 4 of the past 5 years. DOE faces two key management challenges to improving its small business program. Addressing these challenges will bring DOE's small business program more in line with the practices associated with high-performing organizations and with principles contained in the Government Performance and Results Act. 
Specifically, DOE has not defined the concrete steps necessary to enable it to achieve its prime contracting goal and does not collect sufficient information to effectively assess its small business program efforts, identify problems, and implement changes that could further increase small business prime contracting. Other federal agencies with missions, or agency components with missions, similar to DOE's periodically conduct comprehensive evaluations of their programs to determine effectiveness, identify problems, and make changes intended to improve performance. GAO obtained information from the following three agencies: the National Aeronautics and Space Administration, the Department of the Army (U.S. Army Corps of Engineers), and the Department of Health and Human Services (Centers for Disease Control and Prevention).
IRS received approximately $197 million to implement 54 Recovery Act provisions. The Joint Committee on Taxation (JCT) estimated that these 54 provisions would cost about $325 billion between fiscal year 2009 and fiscal year 2019. (App. III shows the estimated cost of each provision.) JCT’s estimate can be reconciled with the Congressional Budget Office’s (CBO) estimate of $212 billion in reduced revenue and the administration’s $288 billion in tax relief shown on its Recovery Act Web site, recovery.gov. JCT’s estimate includes the effect on the budget of provisions of the act administered through the tax code, but CBO’s $212 billion estimate, which is based on JCT’s estimate, includes only the effect on revenue collections. The primary difference is that CBO’s estimate does not include some provisions that result in additional federal outlays rather than only reduced tax collections. This occurs under several provisions when taxpayers can receive a refund even if they do not have any tax liability. On recovery.gov, the administration includes both categories to arrive at its estimate of $288 billion in tax relief. The JCT estimate of $325 billion also includes the cost of COBRA and economic recovery payments provisions that IRS administers. IRS had a role in administering 54 Recovery Act provisions. However, IRS is not responsible for implementing all provisions included in the tax section of the Recovery Act, such as grants in lieu of credits and the New Markets Tax Credit. Grants were authorized because the effectiveness of particular credits was thought to be undermined by economic conditions; the grant provisions are administered elsewhere in Treasury. In a recent report, TIGTA’s count of Recovery Act provisions also differed from ours; for example, it included the New Markets Tax Credit, which is being administered by Treasury’s Community Development Financial Institutions Fund. 
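The reconciliation of the three estimates above reduces to a few lines of arithmetic. The sketch below treats the report's rounded totals as exact; the two derived splits (the refundable-credit outlay portion and the COBRA/economic recovery payment portion) are inferred from the totals rather than separately published figures.

```python
# Reconciling the three Recovery Act tax-cost estimates (rounded, $ billions).
# Totals come from the report; the derived splits are back-of-the-envelope.

cbo_revenue_reduction = 212  # CBO: effect on revenue collections only
admin_tax_relief = 288       # recovery.gov: revenue reduction plus refundable-credit outlays
jct_total = 325              # JCT: all IRS-administered provisions, including COBRA
                             # and economic recovery payments

# Outlay portion of refundable credits (refunds paid even with no tax liability),
# which CBO's revenue-only figure excludes but the administration's includes.
refundable_outlays = admin_tax_relief - cbo_revenue_reduction  # 76

# COBRA and economic recovery payment provisions counted only in JCT's figure.
cobra_and_recovery_payments = jct_total - admin_tax_relief     # 37

print(refundable_outlays, cobra_and_recovery_payments)
```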
To facilitate management of the provisions, as shown in figure 1, IRS grouped the provisions into six categories, with individual credits as the largest category by far in terms of dollars. A seventh category, withholding on government contractors, appears in the figure because it is being administered by IRS; however, IRS did not consider it to be a category because the Recovery Act only delayed the effective date for the withholding. The bulk of the stimulus to be provided by the tax provisions is expected to be in fiscal year 2010, as shown in figure 2. As previously mentioned, we focused our work on five provisions and performed limited work on a sixth. A brief description of each provision follows. (See app. I for a more thorough description of these provisions, including provision requirements.) Build America Bonds (BAB): BABs are taxable government bonds that can be issued with federal subsidies for a portion of the borrowing costs delivered either through nonrefundable tax credits provided to holders of the bonds (tax credit BAB) or as refundable tax credits paid to state and local governmental issuers of the bonds (direct payment BAB). Direct payment BABs are a new type of bond that provides state and local government issuers with a direct subsidy payment equal to 35 percent of the bond interest they pay. Tax credit BABs provide investors with a nonrefundable tax credit of 35 percent of the net bond interest payments (excluding the credit), which represents a federal subsidy to the state or local governmental issuer equal to approximately 25 percent of the total return to the investor. State and local governments may issue an unlimited number of BABs through December 31, 2010, and all BAB proceeds must be used for capital expenditures. 
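The two BAB subsidy structures described above can be illustrated with a worked example. The interest amount is hypothetical; the 35 percent rates and the roughly 25 percent subsidy share are from the description above.

```python
# Sketch of the two BAB subsidy structures (interest amount is hypothetical).

net_interest = 100_000.0  # annual interest paid by a state or local issuer

# Direct payment BAB: Treasury pays the issuer 35% of the interest it pays.
direct_subsidy = 0.35 * net_interest             # 35,000 paid to the issuer

# Tax credit BAB: the bondholder receives a nonrefundable credit equal to
# 35% of the net interest, so the investor's total return is interest + credit.
investor_credit = 0.35 * net_interest            # 35,000 credited to the investor
total_return = net_interest + investor_credit    # 135,000

# Federal subsidy expressed as a share of the investor's total return:
subsidy_share = investor_credit / total_return   # 0.35 / 1.35, roughly 25 percent
print(round(subsidy_share, 3))
```

This is why the tax credit BAB subsidy is quoted as "approximately 25 percent of the total return to the investor" even though the credit rate itself is 35 percent: the denominator includes the credit.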
Consolidated Omnibus Budget Reconciliation Act (COBRA): Recently extended, the COBRA provision originally provided a 65 percent health insurance subsidy for up to 9 months for individuals who lost health insurance coverage due to involuntary termination between September 1, 2008, and December 31, 2009. Former employers, or in some cases multiemployer health plans or insurers, pay 65 percent of insurance premium costs and are reimbursed through a tax credit against their payroll tax liability or through a tax refund if the credit exceeds their payroll tax liability. First-Time Homebuyer Credit (FTHBC): The Recovery Act expanded the FTHBC, which was initially established under the Housing and Economic Recovery Act of 2008, to provide taxpayers a refundable tax credit of up to $8,000 for the purchase of a home. Taxpayers are generally not required to repay the credit unless the home ceases to be the taxpayer’s principal residence within 3 years of purchase. Several of the FTHBC issues discussed in this report involve differences between the 2008 and 2009 credits. The 2008 credit differs from the 2009 credit in that it provided taxpayers up to $7,500, which must be repaid in $500 increments over a 15-year period. We testified on the use of the FTHBC and implementation and compliance challenges in October 2009. Making Work Pay Credit (MWPC): The MWPC is a refundable tax credit that provides up to $400 and $800, respectively, to working individuals and married couples who file joint returns. Taxpayers may receive the credit throughout the year in the form of lower amounts of tax withheld from their paychecks. Taxpayers who do not have taxes withheld throughout the year will not benefit from the credit until they claim it on their annual tax return.
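The difference between the 2008 and 2009 FTHBCs is easiest to see as a net-benefit calculation. The sketch below assumes the maximum credit is claimed and, for 2009, that the home remains the taxpayer's principal residence for at least 3 years, so no repayment is triggered.

```python
# Net benefit of the 2008 vs. 2009 first-time homebuyer credits, assuming the
# maximum credit is claimed and no 2009 repayment trigger applies.

credit_2008 = 7_500
repayment_per_year = 500
repayment_years = 15
total_repaid_2008 = repayment_per_year * repayment_years  # 7,500: fully repaid,
# so the 2008 credit functions as an interest-free loan rather than a grant.

credit_2009 = 8_000
total_repaid_2009 = 0  # generally no repayment required

net_benefit_2008 = credit_2008 - total_repaid_2008  # 0, ignoring the time value of money
net_benefit_2009 = credit_2009 - total_repaid_2009  # 8,000
print(net_benefit_2008, net_benefit_2009)
```

The contrast in net benefit, not just the $500 difference in maximum amounts, is what makes distinguishing 2008 claims from 2009 claims important later in this report.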
Net Operating Loss (NOL) Carryback: The NOL carryback provision allows eligible small businesses—those that had a 3-year gross receipts average of no more than $15 million—to apply for a refund for taxes paid in up to 5 previous years if the business experienced a loss in 2008. Health Coverage Tax Credit (HCTC): The HCTC can be claimed (1) by workers who have lost manufacturing or service jobs due to international trade or have lost public agency jobs and are eligible for a form of Trade Adjustment Assistance or (2) by those who are receiving payments from the Pension Benefit Guaranty Corporation. Among other things, the Recovery Act increased the health insurance premium subsidy rate from 65 percent to 80 percent of the premiums for an eligible taxpayer’s qualified health insurance plan. While IRS has implemented major pieces of legislation in the past, the Recovery Act posed significant implementation challenges because it was a large piece of legislation and many provisions were immediately or retroactively effective and had to be implemented during the tax filing season—IRS’s busiest time of year. IRS officials moved quickly to implement the Recovery Act, aided by their efforts to organize internally and consult externally with lawmakers and industry groups before the act’s passage. IRS put its highest priority on implementing 14 of the 54 IRS provisions in 2009 because they required immediate action. Some of these provisions, such as the FTHBC and NOL carrybacks, were retroactive and could be claimed on a 2008 tax return. IRS started taking action on many of the remaining 40 provisions in 2009 as well. Some provisions affected the 2009 filing season (for tax year 2008), while others mainly will affect the 2010 and 2011 filing seasons. To implement the provisions, IRS quickly issued forms and guidance, communicated with taxpayers and tax return preparers, and made computer programming or processing changes. 
For example, within days of the act’s passage, IRS issued revised withholding tables for the MWPC and produced new or updated tax forms and instructions for it and COBRA. As shown in table 1, as of January 12, 2010, IRS either completed or initiated steps to issue new forms and instructions and revise many others for 48 of the 54 provisions, or 89 percent of the total. When we first checked on progress, as of August 20, 2009, or about 6 months after the Recovery Act’s enactment, the percentage was also greater than 80 percent. In addition, as of January 12, 2010, IRS communicated with taxpayers and tax return preparers through a variety of avenues, such as news releases, postings on irs.gov, podcasts, and YouTube videos for 47 of the 54, or about 87 percent of the provisions. IRS also made computer programming changes to enable processing for paper and electronically filed returns for 39, or about 72 percent of the provisions. IRS did not plan to engage in guidance and instruction for 6 provisions or education and outreach activities for 7 provisions, or make processing and programming changes for 15 provisions because, according to IRS officials, some did not require activities to inform the public or ready IRS systems. We agree with this decision because, as we observed, some of the tax provisions without implementation activities were extensions or expansions of previously existing tax provisions, modified previously existing tax rules, or gave additional guidance. For example, one provision amended the Work Opportunity Tax Credit (WOTC) to create two new categories of targeted groups, and as a result, IRS updated the instructions to the related tax form to be filed with IRS but did not isolate amounts related to these categories on the form itself or make any substantive processing and programming changes.
To implement the Recovery Act, IRS management had to make tradeoffs, often balancing factors such as forgoing some computer changes to collect data against the need to quickly process claims and get tax information and assistance out to the public. For provisions we reviewed in detail, IRS’s initial actions were at times later substantively adjusted. As a first example of a tradeoff, because IRS needed to quickly issue a new set of withholding tables so taxpayers could immediately benefit from the MWPC through reduced federal tax withholding, Treasury decided not to fully account for the effect of the MWPC on taxpayers whose incomes were in the MWPC phaseout range. As a result, taxpayers with incomes in the phaseout range did not receive a precisely calculated tax reduction. In November 2009, IRS issued new withholding tables for 2010 that included two new brackets to better recognize the effect of the MWPC on taxpayers in the phaseout range. The withholding changes for the MWPC may also have unfavorable consequences for some other taxpayers. For instance, TIGTA recently reported that over 15 million taxpayers, such as those receiving pensions and joint filers with two or more jobs between them, may be negatively affected by the MWPC. These taxpayers may owe taxes or receive a lesser or no refund because not enough taxes were withheld from their paychecks to satisfy their eventual tax obligation or maintain previous withholding levels. IRS and Treasury have taken steps to deal with potential underwithholding for these taxpayers. IRS has conducted outreach to encourage taxpayers to look more closely at their tax withholding and plans to do more outreach. IRS’s Web site contains publications and other guidance, including a tax withholding calculator, instructing taxpayers how to adjust their withholding in light of the MWPC.
To address potential underwithholding for pensioners, Treasury developed supplemental withholding tables that pension administrators can use in conjunction with the previously issued tables to offset the effect of the MWPC. In addition, some taxpayers could be subject to tax penalties as a result of the MWPC. TIGTA estimated that over 1 million taxpayers could be assessed a tax penalty or have their tax penalty increased because of the MWPC. IRS has taken steps to address these concerns as well, as it will allow taxpayers to use Form 2210, “Underpayment of Estimated Tax by Individuals, Estates, and Trusts,” to request that the penalties be waived. IRS has alerted taxpayers to this option and how to exercise it by adding information to the instructions for the Form 1040, “U.S. Individual Income Tax Return,” and, according to TIGTA, also plans to add information to Publication 505, “Tax Withholding and Estimated Tax.” A second tradeoff involved the FTHBC. Because of the compressed time to implement the revised credit, IRS did not make computer changes to easily collect data, including the home purchase date, from the Form 5405, “First-Time Homebuyer Credit,” the form used to claim the credit. This was problematic because without the home purchase date IRS was unable to easily distinguish 2008 and 2009 FTHBC claims, a problem we noted in our October 2009 testimony. Distinguishing between the two credits is critical because the acts establishing them contain different requirements, including whether and how the credit is to be repaid. In studying FTHBC claims, TIGTA found that, as of May 29, 2009, IRS had not properly categorized more than 43,000 returns, considering them as 2008 claims instead of 2009 claims. According to TIGTA, if further action is not taken, some taxpayers who bought a home in 2009 could receive a letter from IRS incorrectly indicating that they must repay the credit.
IRS plans to verify the date of purchase on past claims and make any necessary adjustments when it begins enforcing the 2008 FTHBC repayment provisions. IRS also plans to make the computer changes needed to collect all significant data for 2009 claims, including home purchase date, from a revised Form 5405. As a third example of a tradeoff, because of the limited time to make necessary computer programming changes that would have enabled payments by direct deposit, IRS issued BAB direct payments by paper check instead of electronic payments. According to IRS officials, IRS’s use of paper checks possibly increased the costs of issuing BAB direct payments at least nominally. For 2010, IRS plans to change Form 8038-CP, “Return for Credit Payments to Issuers of Qualified Bonds,” to include bank routing numbers so that payments can be made electronically. Also, because of the limited time, IRS used an existing tax-exempt government bond form, Form 8038-G, “Information Return for Tax-Exempt Government Obligations,” for state and local governments to report 2009 BAB information. Governmental issuers were required to submit a copy of Form 8038-G to identify the issue as a BAB and to record information on the issue price, weighted average maturity, yield percentage, and the use of bond proceeds. However, unlike what is required for other bond issues, BAB issuers were also required to attach a separate schedule to identify the type of bond issue. For 2010, IRS plans to use a new form specifically designed for BABs. The new form will collect the same information that was collected in 2009, but issuers will be able to identify the type of BAB and provide information on the type of bond issue all on the form itself, requiring no attachment. Another effect of the limited time available to implement BABs was that BAB work took priority over already-existing bond projects, delaying the other projects somewhat, according to Treasury and IRS officials.
As a final example, IRS also made tradeoffs when implementing NOL carrybacks. As soon as the Recovery Act was enacted, taxpayers began filing 3-, 4-, and 5-year NOL carryback claims, which allowed them to use 2008 small business losses to reduce taxable income from 3, 4, or 5 years before and get tax refunds quickly. Because taxpayers made claims before IRS issued its guidance on March 16, 2009, taxpayers made what IRS officials considered invalid or unclear elections on their NOL carryback claims. Despite the March 16th guidance, taxpayers continued to file unclear carryback claims because they appeared not to have followed the instructions in the guidance, which told taxpayers to attach a statement to their tax return indicating certain information. Officials processed the claims and, to make the process easier for taxpayers, issued a second piece of guidance on May 11, 2009, superseding the March 16th guidance. The May 11th guidance reduced the burden on taxpayers by allowing them to file the appropriate NOL carryback form without having to attach an election statement. By reviewing the computations on the appropriate form, IRS was able to find the information it needed to process the claim. When processing takes more than 45 days, IRS has to pay interest on NOL refunds, just as it must for other refunds taking more than 45 days to process. In order to process the claims on time, IRS initially took one to two revenue agents at seven campus locations away from examination cases for 1 to 2 days per week to determine whether small businesses’ 3-year average gross receipts were under a $15 million ceiling, making them eligible for the NOL carryback refunds. This lasted for about 2 months until IRS developed a Gross Receipts Average Calculator tool. This tool replaced the need for extensive revenue agent involvement by automatically calculating a taxpayer’s average gross receipts.
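The eligibility test that revenue agents applied by hand, and that the Gross Receipts Average Calculator later automated, is a simple average-and-compare check. The sketch below is illustrative; the function name and the sample receipt figures are hypothetical, while the $15 million ceiling and 3-year averaging come from the provision described above.

```python
# Minimal sketch of the NOL carryback eligibility check the Gross Receipts
# Average Calculator automated: a business qualifies for the expanded carryback
# if its average gross receipts over the 3 prior years is at most $15 million.

CEILING = 15_000_000

def eligible_for_nol_carryback(gross_receipts_3_years):
    """gross_receipts_3_years: gross receipts for each of the three prior years."""
    average = sum(gross_receipts_3_years) / len(gross_receipts_3_years)
    return average <= CEILING

print(eligible_for_nol_carryback([12e6, 14e6, 16e6]))  # average 14M, so True
print(eligible_for_nol_carryback([20e6, 16e6, 18e6]))  # average 18M, so False
```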
Collecting data on the tax provisions is important to (1) ensure Recovery Act funds are used efficiently, (2) ensure program compliance, and (3) determine program effectiveness. Without appropriate data, it may be either impossible or costly to determine the extent to which the tax benefits were used, when they were used, whether they were used effectively and as intended, and whether any lessons could be learned. In previous reports, we noted that IRS did not collect sufficient information to determine the use and effectiveness of certain tax provisions. In the report on Indian reservation depreciation, we noted that the lack of sufficient data impeded IRS’s ability to ensure program compliance. In the past, IRS officials said that IRS’s role is to collect data only to the extent that the data help it to administer the tax code. However, for the Recovery Act, IRS went beyond its typical efforts in order to provide transparency over the use of the tax provisions and to collect more reportable data on the tax provisions. For example, IRS did not collect any additional data related to the 5-year NOL carryback for the Job Creation and Worker Assistance Act of 2002, but it is collecting carryback data for the Recovery Act’s NOL provision. Throughout the year after the Recovery Act was enacted, IRS developed plans for collecting data. For example, as shown in table 2, of the 54 provisions that IRS has a role in implementing, IRS had detailed data-collection plans for 17, or about 31 percent. These 17 provisions cover about $207 billion, or 63 percent, of the $325 billion total cost that JCT estimated for the 54 provisions. For about 33 percent of the 54 provisions, covering about $96 billion, or 30 percent, of the total cost, IRS had identified preliminary data sources, but not what, if any, data it will compile and report. For the remaining 19 provisions covering about 7 percent of the total cost, IRS does not plan to compile or report any data.
For 9 of these provisions, covering about $5 billion, or 1 percent of total cost, IRS officials stated that no data are available on tax forms and IRS does not plan to modify tax forms to enable data collection. As an example, the WOTC is included in this group. The Recovery Act expands the credit by adding two categories of eligible individuals. IRS does not plan to modify the tax form to enable data collection on the newly eligible individuals as it currently does not collect data on any of the eligible individuals. For the final 10 provisions, covering about $18 billion or 6 percent of total cost, IRS did not plan to collect data or report any information because, according to IRS officials, this category included 9 provisions relating to guidance or reflecting rule changes. For the other provision, IRS had a minimal role, and Treasury’s Financial Management Service (FMS) took the lead in administering it. We agree with IRS’s decision not to report on these 10 provisions. Very little of the data that IRS has collected on the tax provisions has been released publicly. On September 3, 2009, Treasury released preliminary data collected on its Recovery Act programs, including the use of BABs, economic recovery payments, and the FTHBC. Recovery.gov includes a chart on the estimated dollars distributed through the tax provisions, but this estimate is not based on actual provision use. Rather, it prorates estimates that were created by Treasury’s Office of Tax Analysis before the act’s implementation. Data collected for individual tax provisions that specify the number of provision users and dollar amount of claims made were not reported on recovery.gov, as of January 20, 2010. During much of our review, IRS focused on collecting and internally reporting data on four provisions—BABs, COBRA, the FTHBC, and the HCTC. (See app. IV for information on the use of these provisions.) 
Some of the data IRS collected did not accurately capture taxpayers’ use of Recovery Act provisions, as provision use was sometimes incompletely described or overstated. IRS’s reporting requirements for BABs are minimal in contrast to requirements for Recovery Act infrastructure and other direct spending projects, even though such projects may be similar. For example, funding for both Recovery Act spending projects and BABs may be used for highway, school, water, sewer, or utility improvements. Currently, IRS requires state and local governments to submit an information return at the time of bond issuance that describes the type of bond issue, issue price, weighted average maturity, yield percentage, and the use of bond proceeds. As shown in appendix IV, as of January 1, 2010, state and local governments reported 443 BAB issuances valued at about $32.4 billion. One hundred thirty-one BABs were issued for education, more issuances than for any other type except the “other” category. Spending projects undertaken under the Recovery Act by state and local governments are subject to additional reporting requirements. Section 1512 of the act requires nonfederal recipients of Recovery Act grants, contracts, and loans to provide information on each project or activity, including a description of the project and its purpose, an evaluation of its status toward completion, the amount of recovery funds spent, and the number of jobs created and retained. The reporting required for spending projects is intended to increase accountability and transparency. In addition, federal agencies are required to submit reports that describe the amount of Recovery Act funds made available and paid out to the states through contracts, grants, and loans. Although IRS is not required to publicly report data on BAB use, doing so could increase accountability and transparency.
As part of its bond outreach efforts, Treasury has asked governmental issuers to report how their government has used BABs. Treasury plans to compile this information for its internal use only. There are no efforts by other federal agencies to compile or publish BAB information. The limited data collected and publicly reported for BABs do not reflect the same emphasis as that for spending projects. According to the Director of IRS’s Tax-Exempt Bonds division, more detailed information reporting on BAB-financed projects, such as that required on the Schedule K of Form 990, “Supplemental Information on Tax-Exempt Bonds,” may increase compliance over the life of bonds because government issuers would be reminded of bond requirements each year when filing the form and would be more likely to keep and maintain required documentation. Charitable organizations are required to submit Schedule K annually with their tax returns, although no similar yearly bond reporting requirement exists for governmental bond issuers. The rationale for the Schedule K was that significant noncompliance with recordkeeping requirements for charitable organization tax-exempt bonds existed, making it hard for IRS to determine if the bonds remained qualified for tax exemption throughout their life. Accordingly, Schedule K and its instructions ask for a description of the bond’s purpose—constructing a hospital or acquiring office equipment are examples cited—and the year the project was substantially completed, just like the reporting requirements for Recovery Act spending projects. IRS officials said that yearly BAB reporting similar to the Schedule K would help IRS know whether bonds remained qualified for their tax-advantaged status.
One hundred percent of BAB proceeds are to be used for qualified capital expenditures, and yearly reporting by governmental issuers would allow IRS to more easily identify issuers who have not adhered to this standard or maintained the required documentation to show how bond proceeds were used. It could also help lawmakers or others in determining the overall effectiveness of the newly created bonds. Any additional costs of reporting, such as those already borne for spending projects, could be tempered by having a minimum reporting threshold or delaying the onset of requirements, as was the case when reporting for charitable organizations was instituted. However, if IRS required more specific reporting for BABs, it could not publicly release the information for individual issuers. Currently, unlike the case with the Schedule K, IRS is prohibited from disclosing BAB-related information collected by IRS. Legislation would be needed to allow the BAB information and any similar information related to governmental bonds to be disclosed. According to Treasury officials, given the periodic direct payments that IRS must make for the first time to state and local governments for BABs, ongoing safeguards are needed to verify that payments are only made on outstanding bond issues that continue to meet BAB eligibility requirements. To this end, IRS and Treasury have a working group to examine different approaches for acquiring BAB information over the life of bonds to verify payments and determine how frequently additional bond information reporting by issuers should occur. The working group may suggest new bond reporting requirements, such as a new tax form, in late 2010. More detailed reporting on BABs to provide added transparency and accountability could help with this effort and be beneficial if it were compatible with other needs identified by the working group. As of December 18, 2009, IRS had not reported the number of former employees receiving COBRA premium assistance.
When the data are ready, IRS information on COBRA premium assistance claims will understate the total number of individuals receiving health insurance. Employers are instructed to list the number of individuals provided COBRA premium assistance on Form 941 for 2009, “Employer’s Quarterly Federal Tax Return.” However, the number entered on the form is only the total number of former employees receiving COBRA coverage and does not include their dependents who may be covered under the same insurance plan. For example, if COBRA premium assistance was paid for an insurance plan that covered a former employee and his or her spouse and child, an employer would count that as one person provided COBRA premium assistance on Form 941, not three. Counting this way prevents a meaningful comparison with the JCT’s estimate that 7 million workers and dependents would use the COBRA subsidy. Moreover, the number does not provide stakeholders complete information on provision use. According to IRS officials, the form did not include dependents due to a short time frame for implementation, space constraints on the form, and a desire not to overburden employers with additional reporting requirements. As of December 26, 2009, as shown in appendix IV, IRS had received approximately 192,000 returns from employers claiming about $803 million in COBRA credits. Before September 30, 2009, IRS’s 2009 FTHBC information was understated—it did not show the full dollar amount of the credits claimed. Initially, IRS only reported the difference between the maximum benefits of the 2008 and 2009 FTHBCs as the total benefit of the 2009 FTHBC. That is, the data only reflected an estimated increment above the 2008 FTHBC’s maximum benefit of $7,500 as the amount of credit claimed for the 2009 FTHBC, rather than up to the full amount of the credit, a maximum of $8,000. In a September 3, 2009, report on the Recovery Act, Treasury also reported this estimated incremental amount. 
On September 4, 2009, we pointed out to IRS officials that the increment did not consider that taxpayers might have decided to buy a house because the $8,000 maximum benefit offered with the 2009 FTHBC generally would not have to be paid back to the federal government—unlike the 2008 credit for which the $7,500 would have to be repaid over 15 years. IRS revised its data and provided information reflecting the full amount of the credit claimed in a September 30, 2009, report. As shown in appendix IV, as of November 21, 2009, IRS data show that about 1.1 million filers claimed about $7.3 billion of the 2008 credit, while about 630,000 filers claimed about $4.7 billion of the 2009 FTHBC. The roughly 12,000 new enrollees in the HCTC program that IRS has attributed to the Recovery Act is overstated, at least for the early period soon after the Recovery Act. (App. IV provides more detailed data.) The number is overstated because it included some taxpayers who were in the pipeline for enrollment before the President signed the Recovery Act on February 17, 2009. IRS officials chose April 1, 2009, as the first date for attributing HCTC participation to the act. They chose this date because, among other things, it was the date the health insurance premium subsidy rate rose from 65 percent to 80 percent, and presumably more people would be inclined to enroll given the higher subsidy. However, according to June 2008 IRS data, IRS would only mail HCTC program kits and registration forms to taxpayers after other agencies spent 1 to many months determining if a taxpayer was in fact eligible for the credit in the first place. Thus, to be enrolled in April 2009, many taxpayers would have to have started the HCTC process before the Recovery Act was signed. IRS officials acknowledged some of the enrollees counted as new could have been in the pipeline for enrollment on February 17. 
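The FTHBC understatement described above came from incremental rather than full accounting of 2009 claims. The sketch below illustrates the effect; the claim amounts are hypothetical, and treating the reported figure as the per-claim increment above the 2008 maximum is one plausible reading of the incremental approach.

```python
# Illustration of why IRS's initial 2009 FTHBC figures were understated: the
# data reflected only the estimated increment above the 2008 credit's $7,500
# maximum, not the full amount claimed. Claim amounts are hypothetical.

max_2008 = 7_500
claims_2009 = [8_000, 8_000, 6_400]  # full 2009 credit amounts claimed

incremental = sum(max(claim - max_2008, 0) for claim in claims_2009)  # 1,000
full = sum(claims_2009)                                               # 22,400

# The incremental figure dramatically understates actual use of the credit.
print(incremental, full)
```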
Officials also said their collection of data in general was not intended to determine whether the Recovery Act actually motivated someone to change behavior, in this case to enroll in the HCTC program. The data IRS has collected about the tax provisions it is administering are not designed to isolate or differentiate the stimulus effect of these provisions from that of other Recovery Act provisions. To assess the effects of stimulus policies such as tax incentives, economists use evidence from macroeconometric forecasting models and models that extrapolate from historical data. The forecasting models are based largely on historical evidence, and the analyses estimate behavior based on how economic variables such as gross domestic product (GDP) have responded to stimulus policies in the past. Neither type of model uses current data to assess the effect of the stimulus. The models are used to estimate “multipliers,” which represent the cumulative effect of a particular incentive, such as a tax cut, on GDP over time. For example, a multiplier of 1.0 means a dollar of stimulus financed by borrowing results in an additional dollar of GDP. Generally, multipliers can provide insights into the potential effect on GDP of different types of public spending. Because of the limited historical experience with a fiscal stimulus of the magnitude of the Recovery Act, there is uncertainty about the extent to which multipliers based on historical evidence about the effect of previous business cycles will accurately reflect the stimulus effect this time. However, economists use the models as a basis for constructing reasonable ranges of values for multipliers. Drawing on analyses based on past experience with the results of government spending, CBO has estimated multipliers for Recovery Act provisions that include tax expenditures (see table 3).
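The multiplier arithmetic described above can be sketched in a few lines. The stimulus amount and the multiplier range below are purely illustrative, not CBO's estimates; the point is how uncertainty in the multiplier widens the range of estimated GDP effects.

```python
# How a fiscal multiplier translates stimulus dollars into an estimated GDP
# effect: a multiplier of 1.0 means a borrowed dollar of stimulus yields an
# additional dollar of GDP. Figures below are illustrative only.

def gdp_effect(stimulus_dollars, multiplier):
    return stimulus_dollars * multiplier

stimulus = 100e9        # $100 billion of tax relief (hypothetical)
low, high = 0.5, 1.7    # an illustrative range of multiplier estimates

# Uncertainty in the multiplier produces a wide range of estimated effects.
print(gdp_effect(stimulus, low), gdp_effect(stimulus, high))
```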
Although the economic effect of each of the Recovery Act tax provisions cannot be precisely estimated, the effect of some provisions on specific aspects of the economy may be described in general terms. For example, reports released by the Executive Office of the President’s Council of Economic Advisers (CEA) in September 2009 and January 2010 noted the potential effect of the bonus depreciation, MWPC, and FTHBC provisions. According to CEA’s analysis, the bonus depreciation provision, which allows businesses to recover the costs of acquired property at a faster rate than they otherwise would, benefited businesses and may have led to a slower investment decline in the second quarter of 2009 than would have occurred in the absence of such provisions. Additionally, CEA concluded that although the MWPC, along with other provisions of the Recovery Act and other economic recovery policies, helped stabilize consumption, a small drop in consumption in the second quarter could indicate that households were using the MWPC mainly to increase savings and pay off debt. CEA’s analysis also suggests that, together with other policy actions affecting residential real estate, the Recovery Act’s FTHBC may have moderated job losses in the construction industry.

We have previously reported that evaluating risk is important because it allows an organization to identify potential problems before they occur so that mitigating activities can be planned and implemented over a project’s life to minimize adverse effects on objectives and outcomes. Risk management includes executive oversight, preparing for risk management, identifying and analyzing risks, and mitigating risks. Organizations prepare for risk management by establishing a strategy for identifying, analyzing, and mitigating risks. Identifying and analyzing risks involves identifying risks from internal and external sources and evaluating each risk to determine its likelihood and consequences.
Mitigating risks involves developing risk-mitigation plans that outline the techniques and methods that will be used to avoid, reduce, and control the probability of risk occurrence. Consistent with these activities, IRS established an executive steering committee to oversee Recovery Act implementation. The committee, which was formed before the act’s enactment, included the heads of all IRS operating divisions. It met regularly to discuss issues such as the resources needed to implement the tax provisions, changes to be made to forms and information systems, information to be posted on the Internet, and compliance challenges. IRS also completed eight risk assessments—questionnaires that identified potential risks, their likelihood of occurrence, and their effect—that covered 12 provisions immediately available to taxpayers. The risk assessments considered common risk areas such as the adequacy of internal control procedures, agency-specific risks such as the extent of management oversight over the risk-management process, and program-specific risks such as resource availability. The risk assessments resulted in 9 of the 12 provisions being considered as medium risk and 3 as low risk. IRS plans to reevaluate these assessments and complete assessments of the remaining Recovery Act tax provisions in 2010. Following Treasury policy, IRS completed mitigation plans for the 9 provisions it found to be at medium risk. The mitigation plans outlined the actions that IRS planned to take to address identified risks, and IRS program officials were responsible for monitoring their implementation. In addition, Treasury began reviewing IRS risk-mitigation plans in July 2009 and told us in January 2010 it planned to begin reviewing risk assessments and mitigation plans for tax year 2010 provisions that month, taking into account GAO, TIGTA, and Treasury Office of Inspector General findings.
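Risk assessments of the kind described above typically score each risk by likelihood and consequence and bin the product into a risk level. The scoring scheme, thresholds, and provisions below are hypothetical illustrations, not IRS's actual questionnaire or results:

```python
# Minimal sketch of a likelihood/consequence risk matrix of the kind
# commonly used in risk assessments. The thresholds, provisions, and
# scores are hypothetical, not IRS's actual assessment methodology.

def risk_level(likelihood: int, consequence: int) -> str:
    """Classify a risk from 1-3 likelihood and consequence scores."""
    score = likelihood * consequence
    if score >= 7:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Hypothetical provisions with (likelihood, consequence) scores.
assessments = {
    "Provision A": (2, 3),  # medium
    "Provision B": (1, 2),  # low
    "Provision C": (3, 3),  # high
}

for name, (lik, con) in assessments.items():
    print(name, risk_level(lik, con))
```

Under a scheme like this, only provisions scoring medium or above would require the mitigation plans the report describes.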
Despite its efforts to assess and mitigate potential risks, IRS still encountered compliance challenges with the FTHBC; it addressed some of them. For example, IRS used prerefund filters to ensure that taxpayer income and the amount of credit claimed on a return did not exceed statutory limits. IRS also used an electronic fraud-detection system with filters to detect and prevent fraudulent refund schemes. However, IRS and TIGTA reviews of early FTHBC filings identified additional compliance issues, such as instances where taxpayers who had previously owned a home claimed the credit. Based on its review of early filings, IRS implemented additional computer filters to better determine taxpayer eligibility before refunds were issued. For example, IRS developed filters to check for indications of prior homeownership within the past 3 years. As a result of its prerefund checks, as of February 1, 2010, IRS had frozen about 140,000 refunds pending civil or criminal examination, and, as of December 2, 2009, had identified 175 criminal schemes and had 123 criminal investigations open.

IRS faces a challenge in knowing whether homebuyer credit recipients sell their homes. This is important for the 2009 credit because, under the Recovery Act, at least part of the credit of up to $8,000 may have to be repaid if a home is sold or otherwise ceases to be the taxpayer’s principal residence within 3 years of purchase. Repayment is also an issue for the 2008 FTHBC, as individuals who sell their homes or otherwise cease to use their home as their principal residence before fully repaying their credit of up to $7,500 have to accelerate their repayment. IRS modified Form 5405 for taxpayers to report the disposition of their home or a change in its use, but as of December 2009 it had not decided how it would identify individuals who fail to report.
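The two repayment triggers just described can be sketched as a simple decision rule. This is a deliberate simplification for illustration only; it ignores the dollar amounts and the partial-repayment status of the 2008 credit:

```python
# Sketch of the homebuyer credit repayment triggers described in the
# text: the 2008 credit (up to $7,500) is an interest-free loan whose
# repayment accelerates on any disposition, while the 2009 credit
# (up to $8,000) is recaptured only if the home stops being the
# taxpayer's principal residence within 3 years of purchase.
# A simplification for illustration only.

def repayment_due(credit_year: int,
                  years_as_principal_residence: float) -> bool:
    """Whether a sale/change in use at this point triggers repayment."""
    if credit_year == 2008:
        # Any disposition before the loan is fully repaid
        # accelerates the remaining repayment (simplified here
        # to always trigger).
        return True
    if credit_year == 2009:
        # Recapture applies only within the 3-year window.
        return years_as_principal_residence < 3
    raise ValueError("unsupported credit year")

print(repayment_due(2009, 1.5))  # sold within 3 years
print(repayment_due(2009, 4.0))  # outside the recapture window
```

The compliance difficulty the report describes is precisely that IRS lacks reliable data for the second argument: whether and when the home ceased to be the principal residence.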
An IRS form already exists that could help resolve this compliance issue, but whether IRS is authorized to use it for this purpose would have to be determined. Currently, IRS annually receives some Forms 1099-S, “Proceeds from Real Estate Transactions,” from agents closing real estate transactions such as home sales. The form provides information such as the seller’s name and Social Security number and the sale price of the home and is to be used by IRS to determine if taxpayers have filed returns and reported all of their proceeds from real estate transactions. However, closing agents are generally exempt from reporting information on the sale of principal residences sold for $250,000 or less if the agent receives written certification from the seller that certain assurances are true. Moreover, it is not clear whether IRS has the authority to require Form 1099-S be filed by third parties currently exempted for purposes of recapture from FTHBC recipients. If Form 1099-S information reporting could be required for all home sales or for those taxpayers who do not certify that they had not claimed the FTHBC, IRS might be better able to identify the taxpayers who need to repay part or all of the credit. Because Form 1099-S contains the seller’s Social Security number, IRS could match the identification numbers on the Forms 1099-S to those reported on returns claiming the FTHBC, isolating Form 1099-S filers who should have reported their home sale on the FTHBC form, but did not. As we were completing our review, IRS officials identified an alternative way to analyze whether homebuyer credit recipients sell their homes and are, therefore, possibly subject to payback requirements. This alternative involves acquiring access to third-party data in the form of publicly available real estate information from local governments. This information could include individual properties’ addresses, previous and recent sales prices, and sales dates. 
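Whether based on Form 1099-S identification numbers or on purchased property records, the matching idea described above amounts to intersecting seller identifiers with FTHBC claimants who did not report a disposition. A minimal sketch with hypothetical data; the identifiers and set names are illustrative, not IRS's:

```python
# Minimal sketch of the document-matching idea described above:
# intersect seller taxpayer identification numbers (TINs) from
# Form 1099-S with TINs of FTHBC claimants, then drop those who
# already reported a disposition on Form 5405. All data values
# and names are hypothetical.

fthbc_claimants = {"111-11-1111", "222-22-2222", "333-33-3333"}
reported_dispositions = {"333-33-3333"}          # reported on Form 5405
form_1099s_sellers = {"222-22-2222", "444-44-4444"}

# Claimants who sold (per 1099-S) but never reported the sale.
unreported = (form_1099s_sellers & fthbc_claimants) - reported_dispositions
print(sorted(unreported))  # ['222-22-2222']
```

The same set logic would apply to the property-records alternative, with addresses or parcel identifiers standing in for TINs.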
IRS could use these data in matches against Form 5405 or other IRS data to identify taxpayers who claimed the FTHBC and then sold their property without repaying any required part of the FTHBC benefit they received. IRS expects to purchase the use of these data, use them, and then evaluate how well they help IRS enforce the FTHBC provisions. The evaluation is not yet designed but should be able to cover issues like data reliability, comprehensiveness, and cost-effectiveness.

The Recovery Act provides eligible taxpayers with COBRA premium assistance—a 65 percent reduction in health insurance premiums for individuals who were involuntarily terminated between September 1, 2008, and December 31, 2009. An employer pays 65 percent of its former employees’ insurance premium costs and is reimbursed in the form of a payroll tax credit. This tax provision is only the second refundable tax credit administered by IRS’s Small Business/Self-Employed (SB/SE) division. Unlike with the other credit, SB/SE’s compliance strategy for COBRA focuses on the employer and the Form 941, not on individuals receiving assistance. To identify fraudulent or erroneous COBRA claims made by employers, IRS instituted a number of prepayment checks, such as looking for irregularities in COBRA claims and in the dollar value of subsidies. As of September 22, 2009, the prepayment checks had stopped about 1,500, or 2 percent, of COBRA claims for further review. Other compliance challenges have not been resolved. For example, IRS does not know who receives the COBRA subsidies, which limits its ability to determine if a taxpayer is qualified to receive a subsidy and to ensure that employers do not receive the credit for ineligible individuals. In an effort to reduce employer burden, IRS did not require employers to submit lists of all people receiving COBRA. As a result, it was only aware of the number of individuals an employer reported on Form 941 and the total amount of the subsidy claimed.
Employers are required by IRS to keep records of the COBRA assistance, including the names and Social Security numbers of covered employees, but IRS would see this information only during any examinations.

Another challenge facing IRS is verifying that those taxpayers who are required to repay part of the COBRA subsidy they receive do so. Those individuals and married couples filing joint tax returns with modified adjusted incomes above $125,000 and $250,000, respectively, are required to report on their tax returns that they received COBRA assistance. This requirement is in place because the COBRA subsidy phases out for those taxpayers with higher incomes, and those above the phaseout range are ineligible. IRS plans to conduct a review of filed returns to identify high-income taxpayers who did not report the subsidy as an addition to tax. However, rather than rely solely on audits to determine if these taxpayers are subject to additional tax, IRS has taken some steps to obtain this information. For example, IRS worked with tax-preparation software companies to ensure that pointed questions are asked during tax return preparation to determine if individuals received COBRA during the year. IRS also has plans to use a compliance initiative project to test whether taxpayers did not report the subsidy as an addition to tax and decide if further action is needed.

Individuals are allowed to receive a COBRA subsidy for up to 9 months after their involuntary termination, but, since IRS does not know from the Form 941 who is receiving COBRA subsidies, it also does not have the information to know when an individual’s eligibility period ends. Claims beyond 9 months may not be widespread because some studies have shown that, even with the subsidy, COBRA is generally more expensive to employees than employer-sponsored plans. Thus, in most circumstances, individuals have an incentive to terminate their COBRA coverage when other options exist.
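The eligibility-window check that IRS cannot currently perform can be sketched as a simple month calculation: a subsidy claim is valid only within the 9 months (later 15) after involuntary termination. The dates and the whole-month arithmetic below are illustrative simplifications:

```python
# Sketch of the COBRA eligibility-window check described above: an
# individual may receive the subsidy for up to 9 months after
# involuntary termination (later extended to 15), so claims beyond
# that window warrant review. Dates are hypothetical and the month
# arithmetic is simplified.

from datetime import date

ELIGIBILITY_MONTHS = 9  # 15 under the later extension

def months_between(start: date, end: date) -> int:
    """Whole calendar months elapsed from start to end (simplified)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def claim_within_window(termination: date, claim_month: date) -> bool:
    elapsed = months_between(termination, claim_month)
    return 0 <= elapsed < ELIGIBILITY_MONTHS

print(claim_within_window(date(2009, 3, 1), date(2009, 10, 1)))  # month 7
print(claim_within_window(date(2009, 3, 1), date(2010, 1, 1)))   # month 10
```

The check itself is trivial; the report's point is that IRS lacks the termination dates and enrollee identities needed as inputs.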
However, employers may have an incentive to continue claiming the credit even when former employees are no longer eligible. A past report of ours noted that businesses facing economic hardship may take advantage of the tax system by diverting payroll taxes for their own uses. Employer audits are one of the ways IRS learns if an employer claimed the credit for employees for longer than 9 months. IRS will not be able to audit all employers. To address this concern, IRS has conducted outreach with the employer and payroll communities emphasizing the time limit and planned to continue doing so in the coming months. Yet, other than relying on costly audits, IRS had not finalized actions that it could take to ensure employers stop claiming the credit when their former employees are no longer eligible, thus safeguarding against invalid COBRA claims that increase costs to the federal government.

A cost-effective option to help IRS with the unresolved compliance issues exists. IRS could expand its planned compliance initiative project to test whether employers are claiming COBRA subsidies for employees for longer than 9 months, or 15 months under the recent extension. IRS can use existing information to determine if significant noncompliance with the 15-month provision is apparent. If significant indications of noncompliance are found, IRS could issue “soft notices” to employers to remind them of COBRA eligibility requirements and the consequences of noncompliance. IRS officials responded favorably to these ideas and said they would consider adopting them.

IRS plans to do a “lessons learned” review of its Recovery Act experiences and implementation, most likely after the 2010 filing season, but it had not yet developed detailed plans during our review. This study would be consistent with a recommendation we have previously made.
In an August 2002 report on the advance tax refund program, which the Congress designed to stimulate the economy, we noted that analysis is a key part of understanding performance and identifying improvement options. We therefore recommended that IRS convene a study group to assess its performance with respect to the advance tax refund and related rate-reduction credit. We also said that to ensure that managers faced with similar challenges in the future have the benefit of this assessment, the results should be thoroughly documented. IRS implemented this recommendation and later said that the resulting internal report was a cornerstone in improving administration of the advance child tax credit.

IRS would have benefited from having math error authority (MEA) to enforce at least one Recovery Act provision from the outset rather than only after problems were identified. The Internal Revenue Code provides IRS with MEA to assess additional tax or correct other tax return errors in limited circumstances when an adjustment is the result of mathematical or clerical errors on the return. Over the years, the Congress has granted IRS MEA for specified purposes. For example, when a taxpayer makes an entry on a tax return for a deduction or credit in an amount that exceeds the statutory limit for that deduction or credit, IRS uses its MEA to correct the error during tax return processing. MEA is an automated and low-cost means to protect federal revenue and avoid the need for costly audits. This is due, in part, to the fact that IRS does not have to follow its standard deficiency procedures when using MEA—it must only notify the taxpayer that the assessment has been made and provide an explanation of the error. As described earlier, IRS had problems enforcing some of the eligibility requirements of the FTHBC. After learning about the compliance problems with the FTHBC, the Congress expanded IRS’s MEA in the Worker, Homeownership, and Business Assistance Act of 2009.
It followed our suggestion that, to reduce IRS’s reliance on costly and burdensome audits of the credit, the Congress should consider providing IRS with additional MEA. Specifically, we suggested that the Congress consider giving IRS MEA to use tax return information to automatically verify taxpayers’ compliance with the 2008 FTHBC payback provision and to ensure that taxpayers do not improperly claim the credit in multiple years. In addition to following these suggestions, based on noncompliance identified by TIGTA, the Congress granted IRS MEA to assess additional tax without the notice of deficiency otherwise required if a taxpayer did not meet the credit’s age requirement or did not submit the settlement statement used in the home purchase. The Congress has been incrementally adding MEA authorizations for almost a century. The first basic exemption to the deficiency procedures for mathematical errors can be found in the Internal Revenue law in 1926. In 1976, the Congress expanded the authority beyond mathematical errors to clerical errors and gave taxpayers the right to ask that IRS reverse the math error assessment and follow IRS’s normal deficiency procedures. In the 1990s, the Congress extended MEA multiple times to help IRS determine eligibility for certain tax exemptions and credits. As a recent example of where MEA could also be useful, in 2008 we suggested that the Congress provide IRS with the authority to automatically correct returns for individual retirement account (IRA) contributions that violated certain dollar or age limits. In 2004, IRS had found IRA contribution overclaims by taxpayers under age 50 resulting in $23.2 million in underreported taxes but did not have the MEA to use age-based data to check for age-based eligibility. 
Also, on September 30, 2009, after finding more than $600 million of inappropriately claimed Hope Credits for higher education, TIGTA recommended that the Congress give IRS MEA to disallow claims for the Hope Credit for more years than allowed by law. In a November 2009 report, TIGTA listed four examples of other reports it had issued in fiscal years 2008 and 2009 with issues related to MEA, three recommending that specific MEA be obtained or studied.

Authorizing the use of MEA on a broader basis rather than case-by-case, with appropriate controls, could have several benefits to IRS and taxpayers. It could

- enable IRS to correct all or nearly all returns with types of noncompliance for which IRS identifies with virtual certainty the noncompliance and the needed correction, not just those it can address through other enforcement means;
- be low cost and less intrusive and burdensome to taxpayers than audits;
- ensure that taxpayers who are noncompliant on a particular issue are more often treated alike, that is, that a greater portion of them are brought into compliance, not just those that IRS could otherwise address;
- enhance equity between compliant and noncompliant taxpayers because a greater portion of the noncompliant ones would be brought into compliance;
- provide a taxpayer service, as it would generally allow noncompliant taxpayers to receive their refunds faster than if IRS had to address the error through some other compliance mechanism, have their returns corrected without penalty and before interest is accrued, and avoid time-consuming interaction with IRS under its other programs for resolving noncompliance;
- help ensure taxpayers receive the tax benefits for which they are eligible by identifying taxpayers underclaiming a tax benefit;
- free up IRS resources to pursue other forms of noncompliance; and
- allow IRS to quickly address provisions arising from new and quickly moving initiatives like the Recovery Act without waiting for new MEA to go through the legislative process.

Broader authority to use MEA could take several forms; for instance, it could be granted for (1) new legislation that had to be implemented in short time periods, (2) newly created or revised refundable credits, or (3) wherever IRS could check for obvious noncompliance in both new legislation and already enacted laws. Refundable credits, which entail cash payments to taxpayers irrespective of the amount of their tax liabilities, are growing in popularity and automatic authority could enable IRS to monitor low-dollar amounts on individual returns that would be too labor-intensive and costly to audit.

Although broader MEA could benefit IRS and taxpayers, controls may be needed to ensure broader authority is properly used. While stating that IRS generally uses its authority properly, the IRS National Taxpayer Advocate’s 2006 annual report warned of IRS’s implementation of MEA impairing taxpayer rights. The Taxpayer Advocate pointed out that in considering the 1976 legislation mentioned above, the Congress was concerned that IRS might use its authority in ways that would undermine taxpayer rights. Consequently, the Congress incorporated certain taxpayer safeguards into the legislation, such as requiring IRS to explain to the taxpayer the errors it asserted. Still, the Taxpayer Advocate was concerned that taxpayers, especially low-income taxpayers, might not proactively ask, within 60 days after being assessed tax by IRS, to have their assessment reversed by IRS, and thus might be unable to challenge an IRS notice through normal deficiency procedures or in the Tax Court. She was also concerned that MEA notices to taxpayers did not contain the type of information the Congress envisioned that clearly explained to taxpayers the nature of the error that IRS addressed through MEA.
The Taxpayer Advocate’s 2002 annual report recommended that the Congress specifically limit the scope of the assessment authority for mathematical or clerical errors and provide standards by which to judge any proposed expansion of this authority. The Taxpayer Advocate said that MEA should be limited to situations where there are inconsistent items and the inconsistency is determined from the face of the return; where required items, such as schedules, were omitted from the tax return; and where items on the return are numerical or quantitative. With these or other standards in mind, the Congress could extend broader MEA to IRS but could specify criteria governing when IRS could use the authority and require other controls as well. For example, the Congress could require IRS to submit a report on a proposed new use of MEA. The report could include how such use would meet the standards or criteria outlined by the Congress. The report could also describe IRS’s or the Taxpayer Advocate’s assessment of any potential effect on taxpayer rights. Or, the Congress could require a more informal procedure whereby IRS simply notifies a committee, such as JCT, of its proposed use and subsequently submits a report after such use is underway. In any case, the Congress could provide IRS broader authority to use MEA than is currently authorized, but still provide appropriate safeguards by outlining criteria and guidelines and requiring IRS to report in order to alleviate concerns of improper use of MEA.

In the year since the passage of the Recovery Act, IRS’s quick implementation has made billions of dollars available to bolster the struggling U.S. economy. In the face of significant challenges posed by the Recovery Act, IRS traded off the requirement for quick implementation against the need to collect proper data and enforce compliance with tax laws.
As IRS gained experience with Recovery Act implementation, it at times substantively adjusted its approach for specific provisions. Following through on its stated intention to capture the lessons it learned from the overall experience would help IRS the next time it is charged with similar tasks. Similarly, the data-collection and enforcement framework IRS has created allows room to enhance the data it collects for BABs and to strengthen the foundation for enforcing COBRA and FTHBC provisions. In terms of the FTHBC, we are making no recommendations concerning the payback feature because late in our review IRS identified a potentially promising alternative that it expected to pursue to enforce it. This alternative will bear watching, and we look forward to IRS assessing how well it will work. Finally, receiving broader MEA, with appropriate safeguards, from the Congress would give IRS the flexibility to respond quickly as new uses emerge in the future.

The Congress should consider the following:

- Granting IRS the authority to publicly release information on Build America Bonds (BAB), such as project purpose, beginning and ending dates, and costs; this approach would be broadly consistent with the Recovery Act reporting and transparency provisions for direct spending programs.
- Broadening IRS’s ability to use math error authority (MEA), with appropriate safeguards against misuse of that authority.

We recommend the Commissioner of Internal Revenue take three actions:

- Require governmental issuers to submit additional information on Build America Bond (BAB)-financed projects, including information on project purpose, beginning and ending dates, and costs. This reporting could be similar to the bond reporting required for charitable organizations on the Schedule K of Form 990, “Supplemental Information on Tax-Exempt Bonds.” Should the Congress grant the authority, IRS should publish the information in a report available to the public.
- Direct officials to conduct a compliance initiative project to determine if individuals are receiving COBRA or employers are claiming individual COBRA subsidies for longer than 15 months. IRS can use existing information to determine if significant noncompliance with the 15-month provision is apparent. If significant noncompliance is found, IRS should issue soft notices to all employers to remind them of COBRA eligibility requirements and urge them to correct errors that may have been made.
- Prepare a report detailing the lessons learned from its Recovery Act experiences and implementation and publish the results of its review, in line with the Recovery Act’s emphasis on transparency.

We received written comments on a draft of this report from the Commissioner of Internal Revenue on February 4, 2010 (for the full text of the comments, see app. V). He agreed with the benefit of one of our recommendations and agreed fully with the other two. In agreeing that IRS compliance efforts would benefit from requiring more information from issuers of Build America Bonds, he noted that the benefit would have to be balanced against the burden imposed on state and municipal governments issuing the bonds. As we said in our report, any additional cost of reporting could be tempered by having a minimum reporting threshold or delaying the onset of requirements, as was done when similar reporting for charitable organizations was instituted. The Commissioner recognized, as we had, that IRS would need statutory authority before it could make the information public. He said that if granted that authority, IRS stood ready to implement the recommendation. In agreeing with our other recommendations, the Commissioner said that IRS (1) has plans in place to do a compliance project to test the 15-month COBRA rule, and (2) will review and publish a report on lessons learned from IRS’s management and implementation of the Recovery Act.
The Commissioner added that in those cases in which additional math error authority could be effectively deployed, IRS would welcome it. He said IRS looked forward to discussing the issue in more detail as the Congress considers any new tax legislation. We also received technical comments on a draft of this report from Treasury’s Acting Tax Legislative Counsel and made changes where appropriate. We plan to send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. The report will also be available at no charge on GAO’s Web site at www.gao.gov. For further information regarding this report, please contact me at (202) 512-9110 or at brostekm@gao.gov. Contacts for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report may be found in appendix VI.

The sections below provide background and describe requirements of the five provisions we selected to review in detail, as well as the Health Coverage Tax Credit (HCTC). Appendix II details our objectives, scope, and methodology, including why we selected each provision.

BABs are taxable government bonds, with federal subsidies for a portion of the borrowing costs, that state and local governments may issue through December 31, 2010. BAB subsidies can be either nonrefundable tax credits provided to holders of the bonds (tax credit BABs) or refundable tax credits paid to state and local governmental issuers of the bonds (direct payment BABs). Direct payment bonds are a new type of bond that provides state and local government issuers with a direct subsidy payment equal to 35 percent of the bond interest they pay. Because of this feature, state and local governments are able to offer the bonds to investors at a higher interest rate than they can with tax-exempt bonds.
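The direct payment subsidy arithmetic works out as a 35 percent reduction in the issuer's net interest cost. A short illustrative calculation; the issue size and coupon rate are hypothetical, not drawn from any actual bond:

```python
# Illustrative arithmetic for direct payment Build America Bonds:
# the issuer receives a federal subsidy equal to 35 percent of the
# interest it pays, lowering its net borrowing cost. The bond size
# and coupon rate below are hypothetical.

SUBSIDY_RATE = 0.35

principal = 100_000_000   # hypothetical $100 million issue
coupon_rate = 0.06        # hypothetical 6% taxable coupon

annual_interest = principal * coupon_rate          # paid to investors
federal_subsidy = annual_interest * SUBSIDY_RATE   # paid to the issuer
net_issuer_cost = annual_interest - federal_subsidy

print(annual_interest)   # gross interest to investors
print(federal_subsidy)   # federal payment to the issuer
print(net_issuer_cost)   # issuer's effective interest cost
```

In this sketch a 6 percent taxable coupon nets the issuer an effective rate of about 3.9 percent, which is why such bonds can carry higher coupons than tax-exempt debt while remaining attractive to issuers.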
Direct payment BABs may appeal to a broader market than traditional tax-exempt bonds because a wider range of investors, such as pension funds that pay no taxes and therefore have less incentive to invest in tax-exempt bonds, are able to take advantage of them and receive a return comparable to taxable debt instruments. Tax credit BABs provide investors with a nonrefundable tax credit of 35 percent of the net bond interest payments (excluding the credit), which represents a federal subsidy to the state or local governmental issuer equal to approximately 25 percent of the total return to the investor. This subsidy is expected to make investors indifferent between the tax credit bond and a taxed bond that is otherwise similar. As a result, each dollar of federal revenue foregone for both direct payment and tax credit BABs benefits state and local governments. One hundred percent of the proceeds from BABs must be used for capital expenditures. There is no volume limitation on the amount of eligible BABs that can be issued during this period. COBRA was established in 1985 and provides access to health insurance for individuals who lost their employer-sponsored coverage. Before the American Recovery and Reinvestment Act of 2009 (Recovery Act), individuals paid up to 102 percent of the total COBRA premium cost—the full cost plus a two percent administration fee—to retain their health coverage. The act provided up to 9 months of premium assistance at a lower rate to individuals who were involuntarily terminated from their jobs. The Department of Defense Appropriations Act, 2010 (Pub. L. No. 111–118) extended the duration of premium assistance from 9 months to 15 months. Individuals pay no more than 35 percent of premium costs and their former employers pay the remaining 65 percent. Employers are reimbursed for their COBRA subsidies through a tax credit against their payroll tax liability or through a tax refund if the credit exceeds their payroll tax liability. 
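The 35/65 premium split described above is straightforward arithmetic; a brief sketch with a hypothetical premium amount:

```python
# Illustrative split of a COBRA premium under the Recovery Act
# subsidy: the individual pays no more than 35 percent and the
# former employer pays the remaining 65 percent, recovered as a
# payroll tax credit. The premium amount is hypothetical.

INDIVIDUAL_SHARE = 0.35
EMPLOYER_SHARE = 0.65

monthly_premium = 1_000.00  # hypothetical monthly COBRA premium

individual_pays = monthly_premium * INDIVIDUAL_SHARE
employer_credit = monthly_premium * EMPLOYER_SHARE  # claimed on Form 941

print(individual_pays)  # individual's share
print(employer_credit)  # employer's reimbursable share
```

On a hypothetical $1,000 monthly premium, the individual pays $350 and the employer fronts $650, recovered against its payroll tax liability.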
Employers file Form 941, “Employer’s Quarterly Federal Tax Return,” for a COBRA credit. In some instances, such as for state health plans that are subject to COBRA requirements, multiemployer group health plans or insurers, instead of the former employer, may provide the COBRA subsidy and file for a COBRA credit. To be eligible for COBRA premium assistance, individuals must (1) be involuntarily terminated between September 1, 2008, and December 31, 2009 (recently extended to February 28, 2010), (2) not be eligible for another group health plan, such as Medicare or a group plan offered through a spouse’s employer, and (3) have a modified adjusted income below $145,000, or $290,000 if married and filing a joint tax return. The FTHBC initially was established by the Housing and Economic Recovery Act of 2008 as a tax credit equal to 10 percent of the purchase price of the principal residence, up to $7,500, which took the form of an interest-free loan that must be paid back in $500 increments over 15 years. The Recovery Act increased the maximum credit for the 2009 FTHBC to $8,000, with no payback required unless the home ceases to be the taxpayer’s principal residence within 3 years. This $8,000 credit is a refundable tax credit, meaning that it is paid out even if there is no tax liability. The 2009 FTHBC was enacted into law on February 17, 2009, but eligibility was made retroactive to be applied to homes purchased between January 1, 2009, and November 30, 2009. The Worker, Homeownership, and Business Assistance Act of 2009 extended the FTHBC to home purchases made through April 30, 2010, as well as those that are under a binding contract on that date if the contract provides for closing the sale on or before June 30, 2010. 
The act also authorized a credit of up to $6,500 for individuals who owned and used the same residence as their principal residence for any 5 consecutive years during the 8-year period ending when they bought another property to use as their principal residence. The 2008 and 2009 FTHBC, as well as the 2010 credit, have complex requirements. Regarding the amount of the credit, taxpayers buying their first home can claim the smaller of 10 percent of the home’s purchase price or the maximum credit ($7,500 for the 2008 credit; $8,000 for the 2009 and 2010 credits). Virtually all eligibility requirements for the 2008 and 2009 FTHBC are identical, as noted in table 4. However, there are differences—the primary one being the purchase date. The 2010 FTHBC contains several new requirements. The MWPC provides up to $400 for working individuals and $800 for working married couples. Taxpayers may receive the credit throughout the year in the form of lower amounts of tax withheld from their paychecks. The MWPC is completely phased out for single taxpayers and for married taxpayers filing jointly with modified adjusted gross incomes (MAGI) in excess of $95,000 and $190,000, respectively. Taxpayers must have a Social Security number in order to claim the credit, and nonresident aliens and dependents cannot claim it. If a taxpayer received a $250 economic recovery payment or a $250 government retiree credit, the MWPC is reduced by that amount. Under the Recovery Act, individuals receiving Social Security and certain other benefits were to receive a onetime payment of $250, as were certain government retirees. The NOL carryback provision is available to eligible small businesses—those that had a 3-year gross receipts average of no more than $15 million—if their costs and deductions exceeded their income in 2008. It allowed these businesses to apply for a tax refund for taxes paid in up to 5 previous years. 
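The credit amounts above can be sketched in code. The FTHBC rule (smaller of 10 percent of price or the maximum credit) comes directly from the report; for the MWPC, the 2-percent phaseout rate over a $75,000/$150,000 MAGI floor is the statutory rule rather than something stated here, and we note it reproduces the $95,000/$190,000 complete-phaseout ceilings the report cites. This sketch also assumes the taxpayer has enough earned income to qualify for the full base credit.

```python
def fthbc_amount(purchase_price: float, max_credit: float) -> float:
    """FTHBC: the smaller of 10 percent of the purchase price or the
    maximum credit ($7,500 for 2008; $8,000 for 2009 and 2010)."""
    return min(0.10 * purchase_price, max_credit)

def mwpc_amount(magi: float, married_joint: bool,
                offsetting_payments: float = 0.0) -> float:
    """Making Work Pay Credit sketch.  Base credit is $400 ($800 for
    joint filers), reduced by 2 percent of MAGI over $75,000 ($150,000
    joint) -- an assumption from the statute, which fully phases the
    credit out at $95,000/$190,000 as the report states -- and further
    reduced by any $250 economic recovery payment or government
    retiree credit received."""
    base = 800.0 if married_joint else 400.0
    floor = 150000.0 if married_joint else 75000.0
    credit = base - max(0.0, 0.02 * (magi - floor))
    credit = max(0.0, credit - offsetting_payments)
    return credit
```

For example, a single filer with $85,000 MAGI would receive $400 less 2 percent of the $10,000 excess, or $200.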
The refund is the difference between previous taxes paid and the taxes that would have been paid if the amount of the 2008 loss were deducted from past profits. The Recovery Act increased the small-business gross-receipts limit from $5 million to $15 million and extended the NOL carryback period for 2008 NOLs from 2 years to up to 5 years. The HCTC helps workers pay for health insurance by subsidizing part of their health insurance premium when they are between the ages of 55 and 64 and are receiving payments from the Pension Benefit Guaranty Corporation (PBGC), or when they are eligible for Trade Adjustment Assistance (TAA) benefits because they lost their jobs due to international trade. Other eligibility requirements for individuals include that they not be entitled to benefits from a government health insurance program and that they be enrolled in a qualified health plan. IRS administers the HCTC program but relies on the Department of Labor and PBGC to identify workers who are potentially eligible for the credit. The HCTC can be claimed on a yearly basis on an individual’s tax return, or taxpayers can choose to have advance payments sent to their health plans on a monthly basis as health insurance premiums are due. The Recovery Act made several changes to the HCTC, effective until December 31, 2010, including the following. First, it increased the health insurance premium subsidy rate from 65 percent of premiums to 80 percent. Second, it allowed taxpayers to be reimbursed for payments they made to their health plans when they were eligible for, but not yet enrolled in, the HCTC program. Third, it allowed family members of HCTC recipients to continue to receive coverage for up to 2 years if the qualified taxpayer becomes eligible for Medicare, the taxpayer and the spouse divorce, or the taxpayer dies. Fourth, it added a new qualified health insurance plan—one funded by a voluntary employees’ beneficiary association. 
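The carryback refund logic can be sketched as follows. This is a deliberately simplified illustration under a single flat tax rate; actual refunds are computed by recomputing each carryback year's tax under that year's rules, and the function name and rate are ours.

```python
def nol_carryback_refund(loss_2008: float, prior_year_profits: list,
                         tax_rate: float) -> float:
    """Illustrative NOL carryback refund under an assumed flat tax rate.
    The 2008 loss offsets profits in up to 5 previous years (earliest
    carryback year first); the refund is the difference between taxes
    actually paid and the taxes that would have been owed with the loss
    deducted from those past profits."""
    remaining_loss = loss_2008
    refund = 0.0
    for profit in prior_year_profits:   # ordered earliest carryback year first
        offset = min(remaining_loss, profit)
        refund += tax_rate * offset     # tax previously paid on offset income
        remaining_loss -= offset
        if remaining_loss <= 0:
            break
    return refund
```

For example, a $300,000 loss carried back against $100,000, $150,000, and $200,000 of prior profits at a flat 30 percent rate offsets $300,000 of income and yields a $90,000 refund.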
Other changes that do not expire but will need to be reauthorized in 2010 include broadening eligibility for the TAA program to include service sector and public agency workers and requiring the Department of the Treasury to conduct a biennial survey of HCTC-eligible individuals. Our objectives were to (1) describe the status of the Internal Revenue Service’s (IRS) implementation of the American Recovery and Reinvestment Act of 2009 (Recovery Act) tax provisions; (2) analyze IRS plans to collect data on the provisions, examine whether and how IRS captured data on the use of selected provisions, and discuss the provisions’ overall effect; (3) assess IRS’s efforts to determine potential abuse of the provisions and IRS’s steps for minimizing it; and (4) discuss possible lessons learned for future tax administration. To address the first objective, we identified, from IRS documents, the implementation steps taken and planned for each of the 54 Recovery Act provisions that IRS had a role in administering. We focused on education and outreach, guidance and instruction, and processing and programming activities because they were part of a framework that IRS used to implement Recovery Act tax provisions. To further address this objective as well as others, we selected five Recovery Act tax provisions to review in detail—Build America Bonds (BAB), Consolidated Omnibus Budget Reconciliation Act (COBRA) premium subsidies, the First-Time Homebuyer Credit (FTHBC), the Making Work Pay Credit (MWPC), and net operating loss (NOL) carrybacks. We also reviewed the Health Coverage Tax Credit (HCTC), but our analysis of the HCTC was limited to data-collection issues because we are doing a separate review of the HCTC, which is due in March 2010, as mandated by the Recovery Act. We chose the five provisions to review in detail using the following four criteria: Year of implementation. 
By choosing provisions being implemented in 2009, we could study how IRS’s forms, guidance, systems, and processes were used to implement change relatively quickly. In addition, more reliable data would be available than for provisions that could not be claimed until the future. Estimated revenue loss. We chose estimated revenue loss as another criterion to make sure we included provisions estimated to have a large effect on revenue. We were interested in whether a provision’s estimated revenue loss was among the largest of the 54 provisions. The six provisions we selected had total estimated revenue losses of about $153 billion over the period from fiscal year 2009 through fiscal year 2011, almost half the estimated losses for all 54 provisions. Refundable components. We chose refundability as a third criterion because IRS has frequently noted that refundable tax provisions are more susceptible to abuse than other tax provisions. Further, when IRS actually refunds money to taxpayers, recouping it can be difficult if the monies were paid erroneously. Coverage. As shown earlier in table 1, IRS grouped Recovery Act provisions into the following categories: individual tax credits, tax incentives for business, renewable energy, various bond incentives, health coverage improvement, and COBRA. We chose coverage as a fourth criterion in order to ensure that we considered at least one provision from most of the IRS categories. We did not select provisions from the Renewable Energy group because none of them was being implemented in 2009. We selected the BAB and NOL carryback provisions in spite of their relatively small dollar estimates to achieve wider coverage of IRS categories. Table 5 summarizes how the five provisions we selected for further study addressed these criteria and also shows how the HCTC relates to the criteria. 
For the second objective, dealing with IRS’s collection of data on the Recovery Act provisions, we analyzed IRS planning documents for collecting data for the 54 provisions. For the 5 selected provisions and the HCTC, we analyzed whether IRS would be able to identify provision users and the extent of use. To see how these data could relate to estimating IRS’s Recovery Act provisions’ effect on the overall economy, we consulted with GAO economics experts, including the Chief Economist’s office, and studied Council of Economic Advisers (CEA) and Congressional Budget Office (CBO) reports. To meet the third objective, relating to the potential abuse of provisions, we determined what risk assessments and risk-mitigation plans IRS had done or planned for the future and discussed them with IRS and Department of the Treasury officials. We also assessed IRS’s risk-management efforts against GAO and other published criteria on mitigating abusive noncompliance. For the five selected provisions, we examined the potential for abuse by reviewing IRS documentation and risk assessments and interviewing Treasury Inspector General for Tax Administration (TIGTA) and IRS officials. We used the results of our work and TIGTA’s to answer our fourth objective—discussing possible lessons learned for future tax administration. We interviewed responsible IRS officials to obtain their views on these observations. We found the IRS data we used reliable for the purposes of this report. We determined this after interviewing IRS and, where appropriate, TIGTA officials, and reviewing various TIGTA reports. We conducted this performance audit from June 2009 through February 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 6 details information about the 54 American Recovery and Reinvestment Act of 2009 (Recovery Act) provisions that the Internal Revenue Service (IRS) has a role in administering. The Internal Revenue Service (IRS) collected and internally circulated data on the use of three of the five provisions we focused on in our review, plus the Health Coverage Tax Credit (HCTC). Treasury released limited data to the public, but data on the individual provisions’ use was not posted on recovery.gov, the administration’s official Web site for monitoring the American Recovery and Reinvestment Act of 2009 (Recovery Act). As shown in tables 7 through 10, respectively, and in figure 3, these provisions were (1) Build America Bonds (BAB), (2) Consolidated Omnibus Budget Reconciliation Act (COBRA) premium subsidies, (3) the First-Time Homebuyer Credit (FTHBC), and (4) the HCTC. IRS has plans to collect data on other provisions in the future, as noted in table 2. For example, IRS plans to report data on the Making Work Pay Credit (MWPC) in April 2010. In addition to the contact named above, Libby Mixon, Assistant Director; Amy R. Bowser; Gerardine Brennan; Sherwin D. Chapman; Andrea S. Clark; William J. Cordrey; Mary C. Coyle; John E. Dicken; Rachel E. Dunsmoor; Shirley A. Jones; Lawrence M. Korb; Susan E. Offutt; John G. Smale, Jr.; Steven J. Sebastian; and Anne O. Stevens made key contributions to this report.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) was enacted to bolster the struggling U.S. economy at an estimated cost of $787 billion, of which more than a third was in the form of tax relief to the public. This report (1) describes the status of the Internal Revenue Service's (IRS) implementation of Recovery Act tax provisions; (2) examines whether IRS captured or planned to capture data on the use of the provisions; (3) assesses IRS's efforts to determine potential abuse of the provisions; and (4) discusses possible lessons learned for future tax administration. GAO analyzed IRS's implementation and data-collection plans for each provision; reviewed IRS and Department of the Treasury (Treasury) risk-management documents; interviewed federal and industry officials; and focused on five provisions implemented in 2009: Build America Bonds (BAB), Consolidated Omnibus Budget Reconciliation Act (COBRA), First-Time Homebuyer Credit (FTHBC), Making Work Pay Credit, and Net Operating Loss carrybacks. The Recovery Act posed significant implementation challenges for IRS because it had more than 50 provisions, many of which were immediately or retroactively available and had to be implemented during the tax filing season--IRS's busiest time. Some provisions affected the 2009 filing season (2008 tax year), while others mainly will affect the 2010 and 2011 filing seasons. IRS responded quickly to its challenges. IRS went beyond its typical data-collection efforts and plans to collect some data to track many Recovery Act provisions. Specifically, IRS currently has detailed data-collection plans for 17 (about 31 percent) of the provisions, representing 63 percent of the total estimated cost of the tax provisions. Initial collections did not fully or accurately capture the use of some provisions. In addition, very little of the data that IRS has collected on the tax provisions has been released publicly. 
Similar to what GAO has found about the act's spending projects, the tax provisions' economic stimulus effect cannot be precisely isolated. Economists use evidence from macroeconomic forecasting models and models that extrapolate from historical data to assess stimulus effects. These approaches, however, are imprecise because historical experience may not apply well given the magnitude of the Recovery Act. The effect of some provisions on specific aspects of the economy may be described in general terms. For example, the Council of Economic Advisers noted that in addition to other policy actions affecting residential real estate, the FTHBC may have moderated construction-industry job losses. As a result of IRS's FTHBC prerefund compliance reviews, as of February 1, 2010, IRS had frozen about 140,000 refunds pending civil or criminal examination, and, as of December 2, 2009, had identified 175 criminal schemes and had 123 criminal investigations open. Although IRS addressed some challenges with the FTHBC in these ways, it still needs to finalize a way to identify individuals who fail to report home sales and might be required to repay part of the credit because their homes ceased to be their principal places of residence within 3 years of purchase. A form that could serve this purpose already exists--Form 1099-S, "Proceeds from Real Estate Transactions"--but it is not clear that IRS could use it for this purpose under current legislative authority. As GAO's review ended, IRS identified third-party data that it expected to use, after which it would evaluate the results. Issues IRS encountered in its Recovery Act experience could provide useful guidance for the future. Officials intend to do a lessons-learned study after the 2010 filing season but have yet to develop plans for doing so.
Treatment options for individuals with ESRD include kidney transplants or maintenance dialysis. Kidney transplants are not a practical option on a wide scale, as suitable donated organs are scarce. Consequently, dialysis is the treatment used by most individuals with ESRD. Hemodialysis, the most common form of dialysis, is generally administered three times a week at facilities that provide these services. During hemodialysis, a machine pumps blood through an artificial kidney and returns the cleansed blood to the body. Other dialysis options include receiving hemodialysis at home and peritoneal dialysis. To have been eligible to receive the LVPA in 2011, a facility must have established that it met the CMS regulatory criteria that, during each of the 3 previous years, it (1) provided fewer than 4,000 total dialysis treatments and (2) had not opened, closed, or changed ownership. CMS guidance provided additional detail on the application of these criteria. To establish eligibility, a facility must provide an attestation statement to its designated Medicare contractor, which is responsible for verifying that the facility has met the eligibility criteria. Only after the facility has submitted its attestation and its designated Medicare contractor has verified that the facility meets the eligibility criteria will a facility begin to receive the LVPA for its Medicare-covered dialysis treatments provided to adult beneficiaries. CMS requires facilities to provide an attestation because some of the information the Medicare contractors need to assess a facility’s eligibility—in particular, facilities’ cost reports for the year preceding the payment year—may be unavailable to the Medicare contractors until several months after the payment year begins. 
In cases where the Medicare contractors could not make a final eligibility determination at the beginning of the payment year, they were to conditionally approve LVPA payments; then, once the necessary information became available, the Medicare contractors were required to reassess the facility’s eligibility for the LVPA. If a Medicare contractor determines that a facility that received the LVPA was actually ineligible, the contractor is expected to recoup all LVPA payments made to that facility within 6 months of that determination. Many of the 326 facilities eligible for the 2011 LVPA were located near other facilities, indicating that they might not have been necessary for ensuring access to care. Certain facilities with relatively low volume that were not eligible for the LVPA had above-average costs and appeared to have been necessary for ensuring access to care. Moreover, the design of the LVPA provides facilities with an adverse incentive to restrict their service provision to avoid reaching the 4,000 treatment threshold. Many LVPA-eligible facilities in 2011 were located near other dialysis facilities, indicating that they might not have been necessary for ensuring access to care. While LVPA-eligible facilities were more isolated compared with all dialysis facilities (see fig. 1), nearly 30 percent of LVPA-eligible facilities were located within a mile of another facility, and more than 3 percent of LVPA-eligible facilities shared an address with another facility that was not eligible and was owned by the same company. In addition, more than half—approximately 54 percent—of LVPA-eligible facilities were 5 miles or less from another facility. These results indicate that the patients using many LVPA-eligible facilities may have had access to multiple facilities for receiving dialysis care, which suggests that the LVPA does not effectively target facilities that appear necessary for ensuring access to dialysis care. 
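The 2011 eligibility criteria described above can be expressed as a simple check. This is a sketch of the rules as stated in the report, not CMS's actual verification logic; the function and parameter names are illustrative.

```python
def lvpa_eligible(annual_treatments: list,
                  opened_closed_or_changed_ownership: bool) -> bool:
    """Sketch of the 2011 LVPA eligibility criteria: in each of the
    3 years preceding the payment year, the facility must have
    (1) provided fewer than 4,000 total dialysis treatments and
    (2) not opened, closed, or changed ownership.  Names are ours,
    not CMS's."""
    if len(annual_treatments) != 3:
        raise ValueError("expected treatment totals for the 3 preceding years")
    if opened_closed_or_changed_ownership:
        return False
    return all(t < 4000 for t in annual_treatments)
```

Note the strict inequality: a facility providing exactly 4,000 treatments in any of the 3 years fails the threshold.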
Many LVPA-eligible facilities were located near high-volume facilities, suggesting that these LVPA-eligible facilities may not have warranted a payment adjustment because they were located in areas with a population base sufficient to support high-volume facilities. Approximately 35 percent of the 326 LVPA-eligible facilities were located within 5 miles of a high-volume facility. Of these facilities, 94 percent were located in urban areas, compared with 51 percent of all LVPA-eligible facilities. Compared with all freestanding facilities in our cost analysis, freestanding LVPA-eligible facilities had substantially higher costs per dialysis treatment in 2011. The average cost per treatment for the 216 freestanding facilities that were LVPA-eligible was $272, compared with $235 for all freestanding facilities—a difference of approximately 16 percent. However, the 2011 LVPA did not target other freestanding facilities that provided a relatively low volume of treatments, were isolated, and incurred above-average costs; these facilities were ineligible because they exceeded the treatment threshold. For example, if the volume threshold was raised to 5,000 dialysis treatments, 203 additional freestanding facilities would have been eligible for the 2011 LVPA. Of these 203 facilities, 68 and 25 were located more than 15 miles and 25 miles, respectively, from another facility, indicating that these facilities likely were important for ensuring access to care. On average, costs per dialysis treatment for these two groups of isolated facilities exceeded the average for all freestanding facilities by approximately 9 percent each—$21 and $22, respectively. The design of the LVPA also raises concerns because it provides facilities with an adverse incentive to restrict their service provision to avoid reaching the 4,000 treatment threshold. 
Facilities that reach this threshold lose eligibility for the next 3 calendar years. For example, for a facility that provided 3,999 total treatments in 2010 and met all other eligibility criteria for the LVPA, providing an additional treatment would have caused the facility to lose eligibility for the LVPA for the next 3 calendar years, resulting in an estimated $390,000 in lost revenue from Medicare for 2011 through 2013. CMS has implemented an adjustment that decreases as volume increases in another payment system, the hospital inpatient payment system—an approach which, if applied to the LVPA, could diminish the incentive for providers to limit service provision by making the loss of potential revenue smaller for supplying additional services. In addition, such an adjustment—referred to as a tiered or phased-out adjustment—could more closely align the LVPA with the decline in cost per treatment that occurs as volume increases. For example, among freestanding facilities that met the other LVPA eligibility criteria, the average cost per treatment for facilities that would have qualified under a 3,000 treatment threshold was $290; the average cost per treatment for facilities that would have failed a 3,000 treatment threshold but qualified under a 4,000 treatment threshold was $263; and the average cost per treatment for facilities that would have failed a 4,000 treatment threshold but qualified under a 5,000 treatment threshold was $256. Under the hospital adjustment, hospitals received increased payments per discharge if the hospital provided fewer than 1,600 discharges a year. The amount of the adjustment was as much as 25 percent for hospitals with fewer than 200 discharges and decreased linearly until it was phased out for hospitals with 1,600 or more discharges. Medicare overpaid an estimated $5.3 million for the LVPA to dialysis facilities that did not meet the eligibility requirements established by CMS and did not pay about $6.7 million to facilities eligible for the LVPA. 
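The linear phase-out described for the hospital adjustment can be sketched generically. The 25 percent rate and 200/1,600 discharge thresholds come from the report; applying the same shape to the LVPA's treatment counts is our illustration of the tiered-adjustment idea, not a CMS design.

```python
def phased_adjustment(volume: int, full_rate: float,
                      full_threshold: int, phaseout_threshold: int) -> float:
    """Linear phase-out of a low-volume payment adjustment, modeled on
    the hospital adjustment described in the report: the full rate
    applies below full_threshold, the adjustment declines linearly as
    volume rises, and it reaches zero at phaseout_threshold.  A facility
    just past a threshold therefore loses only a sliver of the
    adjustment, rather than all of it."""
    if volume < full_threshold:
        return full_rate
    if volume >= phaseout_threshold:
        return 0.0
    span = phaseout_threshold - full_threshold
    return full_rate * (phaseout_threshold - volume) / span
```

With the hospital parameters (25 percent, 200, 1,600), a hospital with 900 discharges—halfway through the phase-out range—would receive a 12.5 percent adjustment, and the one-discharge revenue cliff at the threshold disappears.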
The guidance that CMS issued for implementation of the regulatory requirements was sometimes unclear and not always available when needed, and the misunderstanding of LVPA eligibility likely was exacerbated because CMS conducted limited monitoring of the Medicare contractors’ administration of LVPA payments. Medicare overpaid 83 dialysis facilities that were ineligible for the LVPA in 2011 by an estimated $5.3 million, which was nearly one-quarter of the approximately $22.7 million in LVPA payments made that year. (See fig. 2.) These facilities were ineligible because, on the basis of applicable data and methods specified by CMS in its guidance (and clarified through our interviews with CMS), they did not meet the regulatory requirements of having (1) provided fewer than 4,000 dialysis treatments in each of the 3 years preceding the payment year and (2) not opened, closed, or had a change of ownership in the 3 years preceding the payment year. Medicare contractors are expected to recoup payments made in error within 6 months of detecting the error, but as of January 2013, CMS did not know whether any of these overpayments had been recouped. These overpayments were of two types: payments to dialysis facilities that were ineligible at the beginning of 2011 and payments to facilities that were potentially eligible given the data available at the beginning of 2011 but proved ineligible when all pertinent information became available. Most of these payments—about $4.8 million—were to 73 facilities that were clearly ineligible at the beginning of the year because data available prior to 2011 showed that the facility did not meet one or more of the eligibility criteria. The remaining $0.5 million was paid to 10 facilities that met the eligibility criteria for 2008 and 2009, but when data on 2010 activity became available were shown not to have met the eligibility criteria for that year. 
In cases where Medicare contractors could not make a final eligibility determination at the beginning of the payment year, they were to conditionally approve LVPA payments, reassess eligibility when facilities’ 2010 data became available, and, for facilities determined to be ineligible, recoup payments within 6 months of determining ineligibility. Furthermore, many eligible dialysis facilities did not receive any LVPA payments in 2011 and others received payments for only part of the year. These nonpayments amounted to about $6.7 million and affected 273 facilities. Seventy-nine eligible facilities did not receive payments for any treatments; these payments would have amounted to about $3.4 million. It is probable that these facilities did not claim the LVPA—that is, they did not attest to their eligibility, as required in regulations—although it is also possible that some attested to their eligibility and were incorrectly denied by their Medicare contractor. Another 194 facilities received some but not all payments for which they were eligible—in total accounting for another estimated $3.3 million in nonpayments. Thirty-two facilities did not start receiving LVPA payments until part way through the year, but then consistently received payments for all treatments for the remainder of the year. These are likely facilities that were late in attesting to their eligibility, resulting in LVPA nonpayments of about $0.9 million. For the remaining estimated $2.4 million in nonpayments to 162 facilities, the reason is less clear because there is no discernible pattern. For example, some facilities received payments for several months, did not receive payments for 1 or more subsequent months, and then started receiving payments again. Other facilities received payments for some but not all treatments within a given month for multiple months in a row. 
We cannot explain the cause of these payment inconsistencies, but the inconsistencies could suggest problems with the claims payment system. Many of the overpayments to ineligible facilities showed a similar lack of pattern, which also could suggest problems with the claims payment system. In 2011, CMS correctly paid the LVPA to 249 facilities for at least some of their treatments; these payments totaled an estimated $17.4 million. Fifty-five of these facilities received about $4.2 million in payments for all their treatments. CMS paid the remaining estimated $13.2 million to the 194 eligible facilities that received the LVPA for only some treatments. Although CMS provided opportunities for Medicare contractors and facilities to ask clarifying questions regarding its implementation of the requirements for LVPA eligibility, the guidance for implementing these requirements was sometimes unclear and not always available when needed. For example, the majority of the facilities that incorrectly received the LVPA—54 of 83—were hospital-affiliated facilities that failed the volume threshold. Eventually, CMS specified that Medicare contractors should combine the treatments of a hospital’s affiliated dialysis facilities in determining whether the LVPA requirement for fewer than 4,000 treatments had been met. While this method of applying the regulatory requirement is logical because hospital-affiliated facilities do not file individual cost reports, CMS did not issue explicit guidance on this topic until July 2012. CMS guidance to Medicare contractors for determining whether a facility had met the LVPA requirement of not having opened, closed, or changed ownership during the 3 years preceding the payment year was neither clear nor timely. 
According to our interviews with CMS officials, the agency’s intention was for contractors to verify that each of the facility’s cost reports for the previous 3 years covered exactly 12 months; however, the connection between the regulatory requirement and the duration of the cost-reporting periods has not been made explicit in CMS guidance. (For example, an October 2010 internal technical direction letter stated that, in order to meet the LVPA eligibility verification requirements, Medicare contractors needed to confirm that fewer than 4,000 total treatments were provided for each of the 12-month cost-reporting periods—however, it was not clear which regulatory verification requirement(s) the sentence was implementing.) CMS officials explained that three cost-reporting periods of exactly 12 months (which need to be consecutive) are a time frame that would exhibit business practice patterns that demonstrate a facility is consistently low-volume and that, because cost reports correspond with the facility’s fiscal year, using them provides a snapshot of a facility’s financial ability to incur costs for furnishing renal dialysis. Furthermore, CMS officials noted that having three cost-reporting periods of exactly 12 months each was a reasonable method of assessing the regulatory requirement because, generally, if a facility opened, closed, or had a change in ownership (in which case the facility may receive a new provider number), this would cause a break in the cost-reporting period and thus lead a facility to have one or more cost reports that spanned fewer than 12 months. 
In July 2011, CMS issued public guidance clarifying that the relevant periods during which a facility had not opened, closed, or had a change in ownership were the three cost-reporting periods before the payment year and not the 3 calendar years before the payment year. While this guidance helped clarify that the 12-month rule was sufficient for assessing that a facility had not opened, closed, or had a change in ownership, as of January 2013, no CMS guidance has been explicit on this topic, and none has stated that each cost-reporting period must be exactly 12 months. Unclear and late guidance for determining whether a facility had opened, closed, or changed ownership led to misunderstanding about which facilities were eligible, and at least some of the misunderstanding persisted as of September 2012. For example, when we questioned a representative of a large dialysis organization in September 2012 about some of the organization’s facilities that we found to have not received the 2011 LVPA despite being eligible, the representative still believed those facilities were ineligible because they had a change in ownership during the previous 3 calendar years. In fact, these facilities were still eligible for the 2011 LVPA because the change in ownership occurred after the end of the facilities’ 2010 cost-reporting period and they therefore had 2008, 2009, and 2010 cost reports that each covered exactly 12 months. In addition, when we questioned the representative about some of the organization’s facilities that we found to have incorrectly received the 2011 LVPA, the representative still believed those facilities met all the regulatory requirements and therefore had been eligible. However, because these facilities opened in December 2007 and complied with CMS’s general requirement that cost reports not span less than a month, they had a 2008 cost report that spanned slightly more than 12 months. This made these facilities ineligible for the 2011 LVPA. 
When we shared this example with CMS, a CMS official stated that the agency had not considered the possibility that a facility could have a cost report spanning more than 12 months. Additionally, when we questioned a representative from a different large dialysis organization in September 2012 about some of that organization’s facilities that we found to have incorrectly received the 2011 LVPA, the representative still believed that those facilities had been eligible because the changes in ownership for those facilities did not result in new provider numbers. However, these facilities were ineligible because the changes in ownership caused a break in the cost-reporting period and thus the facilities had at least one cost report that spanned fewer than 12 months. While CMS has continued to issue clarifying guidance and provide Medicare contractors and facilities with opportunities to ask clarifying questions, evidence shows that CMS’s guidance for determining LVPA eligibility was not fully and correctly implemented. In particular, none of the estimated $5.3 million in 2011 overpayments had been recouped by June 2012, based on an analysis of claims, and CMS was not aware of any overpayments that had been recouped by January 2013. This suggests that many Medicare contractors either had not yet discovered the payments made in error or were not aware of their obligation to reassess facilities’ eligibility once the cost report for the previous year became available and to recoup overpayments within 6 months of discovery. Another possible reason for overpayment or nonpayment of the LVPA is that some of the guidance was sent only to Medicare contractors and was not publicly available. 
Medicare contractors are responsible for ensuring that facilities receive any required information based on this guidance, and that function is particularly important for the LVPA because in order to receive the LVPA a facility must first attest to its eligibility, which it will do only if it believes it is eligible. We do not know the extent to which continued misunderstanding of LVPA eligibility stems from Medicare contractors’ failure to share the relevant portions of this nonpublic guidance or from facilities’ not understanding the guidance that they received. Much of the misunderstanding and resulting payment problems related to eligibility were exacerbated by CMS’s limited monitoring of the Medicare contractors and its consequent limited knowledge about implementation of the LVPA. While CMS requested information about the 2011 LVPA from Medicare contractors in October 2011 and again in July 2012, as of January 2013, it had not yet verified whether the information it received was complete or in a usable form. In particular, CMS still did not know which facilities were eligible for the 2011 LVPA, which facilities had attested to being eligible for the adjustment, or which facilities had received the 2011 LVPA. CMS intended the LVPA to encourage small ESRD facilities to continue operating in areas where beneficiary access might be jeopardized if such facilities closed. However, as designed, the LVPA does not effectively achieve this goal because it does not target all relatively low-volume, high-cost facilities that are in areas where beneficiaries may lack other dialysis care options, and it targets some facilities that appeared unnecessary for ensuring access to dialysis, such as dialysis facilities located in close proximity to other facilities. In addition, facilities currently face a large loss in potential revenue if they reach the LVPA treatment threshold. 
This creates an adverse incentive for facilities to restrict their service provision to avoid reaching the treatment threshold. In addition to these concerns about more appropriately targeting the LVPA, we also found significant issues associated with its implementation, including frequent LVPA overpayments. These overpayments primarily stemmed from unclear and untimely CMS guidance and persisted because of CMS’s insufficient monitoring of Medicare contractors. Without clear, timely guidance and stronger monitoring of Medicare contractors’ implementation of the guidance, Medicare may continue to pay facilities that are not eligible for the LVPA and to not pay many facilities that are eligible. Although the amount of money involved was small—overpayments and nonpayments totaling about $12 million in 2011 for the $10.1 billion ESRD program—payment problems with the adjustment undermined its purpose, which is to encourage small ESRD facilities to continue operating in areas where beneficiary access might be jeopardized without them. To more effectively target facilities necessary for ensuring access to care, we recommend that the Administrator of CMS consider restricting the LVPA to low-volume facilities that are isolated. To reduce the incentive for facilities to restrict their service provision to avoid reaching the LVPA treatment threshold, we recommend that the Administrator of CMS consider revisions such as changing the LVPA to a tiered adjustment. 
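To illustrate the cliff effect and how a tiered adjustment would soften it, the sketch below contrasts the two structures. The treatment threshold, bonus percentage, and tier boundaries are all illustrative assumptions (the report does not specify CMS's figures); the $235 base is the average cost per treatment cited elsewhere in this report.

```python
def cliff_adjustment(treatments: int) -> float:
    """All-or-nothing adjustment: full bonus below the threshold,
    nothing at or above it (threshold and bonus are illustrative)."""
    THRESHOLD, BONUS = 4000, 0.20
    return BONUS if treatments < THRESHOLD else 0.0

def tiered_adjustment(treatments: int) -> float:
    """Adjustment that steps down as volume rises, so crossing any
    one boundary forfeits only part of the bonus."""
    tiers = [(2000, 0.20), (3000, 0.13), (4000, 0.06)]  # illustrative
    for ceiling, bonus in tiers:
        if treatments < ceiling:
            return bonus
    return 0.0

BASE = 235.0  # average cost per dialysis treatment cited in the report
for n in (3999, 4001):  # one treatment count below vs. above the threshold
    cliff = n * BASE * (1 + cliff_adjustment(n))
    tiered = n * BASE * (1 + tiered_adjustment(n))
    print(f"{n} treatments: cliff ${cliff:,.0f}, tiered ${tiered:,.0f}")
```

Under the cliff structure, providing two additional treatments forfeits the entire bonus on every treatment; under the tiered structure, only the smallest tier's bonus is lost at the final boundary. This is consistent with the point made later in this report that a tiered design would reduce, though not eliminate, the incentive to limit services.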
To ensure that future LVPA payments are made only to eligible facilities and to rectify past overpayments, we recommend that the Administrator of CMS take the following four actions: require Medicare contractors to promptly recoup 2011 LVPA payments that were made in error; investigate any errors that contributed to eligible facilities not consistently receiving the 2011 LVPA and ensure that such errors are corrected; take steps to ensure that CMS regulations and guidance regarding the LVPA are clear, timely, and effectively disseminated to both dialysis facilities and Medicare contractors; and improve the timeliness and efficacy of CMS’s monitoring regarding the extent to which Medicare contractors are determining LVPA eligibility correctly and promptly redetermining eligibility when all necessary data become available. We received written comments on a draft of this report from HHS, which are reprinted in appendix I. HHS agreed with our recommendations and stated it would explore refinements to the design of the LVPA and take actions to improve its implementation. HHS also provided technical comments, which we incorporated as appropriate. With regard to our recommendation that CMS consider restricting the LVPA to low-volume facilities that are isolated, HHS stated that CMS will explore potential refinements. HHS stated that other factors, in addition to geographic isolation, may contribute to an ESRD facility being low-volume and that the department had studied the costs of both rural and nonrural facilities and decided not to implement an adjustment on the basis of rural location. We did not analyze all the reasons facilities were low-volume, nor did we recommend a payment adjustment for rural facilities. However, we believe that providing increased payments to facilities in close proximity to one another may not be warranted. 
We also note that, while facilities certified on or after January 1, 2011, that apply for the LVPA must combine all of the treatments provided by facilities under common ownership within 25 miles to determine eligibility, this restriction does not ensure that only isolated facilities receive the LVPA. For example, two facilities not under common ownership could be located in close proximity and still receive the LVPA. In response to our recommendation to consider revisions such as changing the LVPA to a tiered adjustment, HHS stated that it would explore whether refinements to the LVPA are necessary. HHS stated that the incentive for facilities to limit dialysis services would exist regardless of where the decrease in payment occurred. We agree that such a change would not eliminate the incentive to limit dialysis services, but we believe it would reduce the incentive. HHS concurred with our recommendation about ensuring proper payments and rectifying past overpayments. HHS also listed specific actions it plans to take to implement the recommendation, including using multiple methods to communicate with Medicare contractors and ESRD facilities to deliver clear and timely guidance. We invited two organizations to provide oral comments on our draft report: the Kidney Care Council (KCC), which represents dialysis facility companies, and the National Renal Administrators Association (NRAA), which represents independent dialysis facilities. Representatives from these organizations expressed their appreciation for the opportunity to review the draft. Both KCC and NRAA noted that facilities within close proximity to another facility may still be necessary for ensuring access to dialysis care (for example, if the other facility is operating at capacity) or access to choice of dialysis modality (for example, if the other facility does not offer the same dialysis options). 
While these situations may occur, if CMS determines a single larger facility could provide appropriate services where two or more smaller facilities exist now, paying the LVPA to the existing smaller facilities may not be warranted. NRAA agreed with our finding that CMS guidance on LVPA eligibility was unclear and not transparent to facilities. Additionally, NRAA noted that CMS’s guidance requiring that hospitals—but not large dialysis organizations—sum the treatments across all of their affiliated facilities when determining eligibility for the LVPA was inconsistent. We agree there is some inconsistency; however, it will be somewhat reduced starting in 2014 as CMS requires that facilities certified after January 1, 2011, sum their treatments across all facilities that are under common ownership and within 25 miles. NRAA also disagreed with the statement that hospital cost reports are the only source of information on total treatments provided by hospital-affiliated facilities. As we note in the report, CMS officials told us that cost reports are the only source of total treatments. Both KCC and NRAA requested more details on GAO’s recommendations related to improving the design of the LVPA. Our recommendations—that CMS should (1) more effectively target facilities necessary for ensuring access to care by considering restricting the LVPA to low-volume facilities that are isolated and (2) reduce the incentive for facilities to restrict their service provision by considering revisions such as changing the LVPA to a tiered adjustment—outlined the factors CMS should consider in improving the LVPA. We did not specify details of the design because we believe CMS should have flexibility in how to more effectively target facilities necessary for ensuring access to care and reduce their incentive to restrict service provision. KCC urged GAO to recommend that CMS pay out LVPA payments that CMS failed to make. 
We believe facilities are best positioned to determine and pursue their own rights to appeal Medicare claims determinations. Technical comments from KCC and NRAA were incorporated in the draft as appropriate. We are sending copies of this report to the Secretary of Health and Human Services. The report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. In addition to the individual named above, Phyllis Thorburn, Assistant Director; Todd D. Anderson; Alison Binkowski; William Black; George Bogart; Elizabeth T. Morrison; Brian O’Donnell; and Jennifer Whitworth made key contributions to this report.
Medicare spent about $10.1 billion in 2011 on dialysis treatments and related items and services for about 365,000 beneficiaries with end-stage renal disease (ESRD). Most individuals with ESRD are eligible for Medicare. As required by the Medicare Improvements for Patients and Providers Act of 2008 (MIPPA), CMS implemented the LVPA to compensate dialysis facilities that provided a low volume of dialysis treatments for the higher costs they incurred. MIPPA required GAO to study the LVPA; GAO examined (1) the extent to which the LVPA targeted low-volume, high-cost facilities that appeared necessary for ensuring access to care and (2) CMS's implementation of the LVPA, including the extent to which CMS paid the 2011 LVPA to facilities eligible to receive it. To do this work, GAO reviewed Medicare claims, facilities' annual reports of their costs, and data on dialysis facilities' location to identify and compare facilities that were eligible for the LVPA with those that received it. The low-volume payment adjustment (LVPA) did not effectively target low-volume facilities that had high costs and appeared necessary for ensuring access to care. Nearly 30 percent of LVPA-eligible facilities were located within 1 mile of another facility in 2011, and about 54 percent were within 5 miles, indicating these facilities might not have been necessary for ensuring access to care. Furthermore, in many cases, LVPA-eligible facilities were located near high-volume facilities. Among the freestanding facilities in GAO's analysis, LVPA-eligible facilities had substantially higher costs per dialysis treatment than the average facility ($272 compared with $235); however, so did other facilities that provided a relatively low volume of treatments (and were isolated) but were ineligible for the LVPA. 
The design of the LVPA gives facilities an adverse incentive to restrict service provision because facilities could lose a substantial amount of Medicare revenue over 3 years if they reach the treatment threshold. In another payment system, the Centers for Medicare & Medicaid Services (CMS) implemented a tiered adjustment that decreases as facility volume increases. Such an adjustment could diminish the incentive for dialysis facilities to limit service provision and also more closely align the LVPA with the decline in costs per treatment that occurs as volume increases. Medicare overpaid an estimated $5.3 million in 2011 to dialysis facilities that were ineligible for the LVPA and did not pay an estimated $6.7 million that same year to facilities that were eligible. The payment problems occurred primarily because the guidance issued by CMS on facility eligibility was sometimes not clear or timely and CMS's monitoring of the LVPA was limited. For example, the majority of the ineligible facilities that received the LVPA were hospital-affiliated facilities that failed the volume requirement. Although CMS gave the Medicare contractors guidance for determining how to count treatments when facilities are affiliated with hospitals, the agency did not issue that guidance until July 2012. CMS has conducted limited monitoring of the LVPA, which has left CMS with incomplete information about LVPA administration and payments. For example, CMS was unaware as of January 2013 whether its contractors had recouped erroneous 2011 LVPA payments. In addition, CMS had requested information from its contractors about the implementation of the 2011 LVPA, such as which facilities were eligible for or had received the LVPA, but had not yet verified whether the information it received was complete or in a usable form. 
Without complete information about the administration of this payment adjustment, CMS is not in a position to ensure that the LVPA is reaching low-volume facilities as intended or that erroneous payments to ineligible facilities are recouped. To more effectively target the LVPA and ensure LVPA payment accuracy, GAO recommends that the Administrator of CMS consider restricting payments to low-volume facilities that are isolated; consider changing the LVPA to a tiered adjustment; recoup 2011 LVPA payments that the Medicare contractors made in error; improve monitoring of those contractors; and improve the clarity and timeliness of guidance. The Department of Health and Human Services, which oversees CMS, agreed with GAO's recommendations.
According to the Bureau, no accurate estimate exists of the total number of Americans living abroad. The Constitution and federal law give the Bureau discretion to decide whether to count American citizens living abroad. In prior censuses, the Bureau has generally included “federally affiliated” groups—members of the military and federal employees and their dependents—but has excluded private citizens residing abroad from all but the 1960 and 1970 Censuses. The 2000 Census, using administrative records, found 576,367 federally affiliated Americans residing overseas, including 226,363 military personnel, 30,576 civilian employees, and 319,428 dependents of both groups. In response to congressional direction and the concerns of various private organizations, the Census Bureau launched a research and evaluation program to assess the practicality of counting both private and federally affiliated U.S. citizens residing abroad. The key part of this effort, the enumeration, took place from February 2004 through July 2, 2004, in France, Kuwait, and Mexico. To promote the overseas census test the Bureau relied on third parties—American organizations and businesses in the three countries—to communicate to their members and/or customers that an overseas enumeration of Americans was taking place and to make available to U.S. citizens either the paper questionnaire or Web site address. Currently, the Bureau is processing and analyzing the overseas questionnaire data and plans to complete an evaluation of the test results in early 2005. The Bureau estimates that it will have spent approximately $7.8 million in fiscal years 2003 through 2005 to plan, conduct, and evaluate the 2004 test. The Bureau has requested additional funds for fiscal year 2005 to plan for further testing scheduled for 2006. The Bureau also plans to include overseas testing in the 2008 dress rehearsal if it were to receive the necessary funding. 
In May 2004 we reported on the design of the overseas enumeration test and concluded that because of various methodological limitations, the test results will only partially answer the Bureau’s key research objectives concerning feasibility (as measured by such indicators as participation and number of valid returns), data quality, and cost. Further, we noted that the Bureau should not decide on its own whether or not to enumerate Americans overseas, and in fact would need congressional guidance on how to proceed. As shown in figure 1, the key decisions facing Congress in this regard include, in addition to the threshold question of whether Americans residing overseas should be counted, how the data should be used and whether to enumerate this population group as part of the decennial census. We also recommended that, if further testing were to occur, the Bureau resolve the shortcomings of the design of the 2004 test and better address the objectives of an overseas enumeration. As agreed with your offices, our objectives for this report were to assess (1) whether the Bureau implemented the test consistent with its design, and (2) the initial lessons learned from the test results and their implications for future overseas enumerations. To assess the first objective, we interviewed Bureau officials and compared the Bureau's test plans with what was actually done at the three test sites. We visited Paris, France, and Guadalajara, Mexico, to obtain the views of 12 private, civic, and other organizations on the implementation of the overseas census test and/or to confirm the availability of census materials at 36 organizations. In addition, to a more limited extent, we interviewed officials from third party organizations in Kuwait via the telephone or e-mail. 
We judgmentally selected these organizations because they had agreed to display census promotional materials and, in some cases, had also agreed to do one or more of the following activities: make available paper copies of the census questionnaire, publish information in a newsletter, post a link to a Web site, send outreach e-mail to members, and/or create speaking opportunities to discuss the census. The results of these visits are not necessarily representative of the larger universe of third-party organizations. To assess the implications of the test results on future overseas enumerations and the 2010 census, we obtained from Bureau officials preliminary results of the overseas census by test site and response mode as well as cost data. We also interviewed officials from the Bureau and third-party organizations to determine what lessons were learned from the test and the implications on future overseas enumeration efforts. The Bureau’s design for the 2004 overseas enumeration test was generally implemented as planned and completed on schedule. The Bureau’s design had four key components: the mode of response, the questionnaire designed specifically for Americans living overseas, three test sites, and an outreach and promotion program designed to communicate and educate Americans abroad that a test census was being conducted. Table 1 describes each of these components in greater detail. However, while the test was generally implemented as designed, our earlier report pointed out several methodological limitations with the design, such as not being able to calculate response rates because the universe of Americans is unknown or not being able to measure the quality of data because of the impracticality of developing an address list. As we discuss later in this report, it is these methodological limitations that impede the Bureau’s ability to implement a successful overseas enumeration. 
Although the 2004 overseas enumeration test ended in early July 2004 and the Bureau has just begun evaluating the results, the response levels were poor, and responses were very expensive to obtain on a per-unit basis. The response level to the overseas enumeration suggests that the current approach to counting overseas Americans—a voluntary survey that relies heavily on marketing to get people to participate—by itself cannot secure a successful head count. Further, obtaining the additional resources needed to produce substantially better results may not be feasible, and still not yield data that are comparable in quality to the stateside enumeration. The 5,390 responses the Bureau received for this test were far below what the Bureau planned for when printing materials and census forms. While the Bureau ordered 520,000 paper forms for the three test sites, only 1,783 census forms were completed and returned. Of these, 35 were Spanish-language forms that were made available in Mexico. The remaining 3,607 responses were completed via the Internet. Table 2 below shows the number of census questionnaires that the Bureau printed for each country and the number of responses it actually received in both the paper format and via the Internet. Because of the low response levels, in early April 2004, the Bureau reversed its decision not to use paid advertising and in May 2004 initiated a paid advertising campaign in France and Mexico. This included print and Internet ads in France and print and radio ads in Mexico. See figure 2 for examples of the ads used in the paid advertising campaign. A Bureau official told us the ad campaign for the 2004 overseas test cost about $206,000. This official said there were surplus funds available in the project budget to use for this purpose due to lower-than-expected processing and postage costs for the overseas test. 
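The response counts above, together with the approximately $7.8 million cost estimate reported earlier, reconcile as simple arithmetic. A quick check, using only figures reported for the test:

```python
paper_returns = 1_783       # includes the 35 Spanish-language forms
internet_returns = 3_607
total_responses = paper_returns + internet_returns
print(total_responses)      # 5390, matching the total cited in the report

forms_printed = 520_000
print(f"{paper_returns / forms_printed:.2%}")  # 0.34% of printed forms were used

test_cost = 7_800_000       # estimated FY 2003-2005 spending on the test
print(round(test_cost / total_responses))      # 1447, i.e., roughly $1,450 per response
```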
While the Bureau saw some increase in the number of responses after the paid advertising campaign began, this official said the increase was slight. Not only were response levels low, but responses were extremely expensive to obtain on a unit basis—roughly $1,450 for each returned questionnaire, based on the $7.8 million the Bureau spent preparing for, implementing, and evaluating the 2004 overseas test. In contrast, the unit cost of the 2000 Census was about $56 per household. Although the two surveys are not directly comparable because the 2000 Census costs covered operations not used in the overseas test, the 2000 Census was still the most expensive census in our nation’s history. The main reason for the high unit cost is the low return rate. However, significantly boosting participation levels may not be feasible. The Bureau’s experience in the 2000 Census highlights the level of effort that was needed to raise public awareness about the census and get people to complete their forms. For the 2000 decennial, the Bureau spent $374 million on a comprehensive marketing, communications, and partnership effort. The campaign consisted of a five-part strategy conducted in three waves beginning in the fall of 1999 and continuing past Census Day (April 1, 2000). The effort helped secure a 72-percent return rate. Specific elements included television, radio, and other mass media advertising; promotions and special events; and a census-in-schools program. Thus, over a period of several months, the American public was on the receiving end of a steady drumbeat of advertising aimed at publicizing the census and motivating people to respond. The Bureau also filled 594 full-time partnership specialist positions. These individuals were responsible for mobilizing support for the census on a grassroots basis by working with governmental entities, private companies, religious and social service groups, and other organizations. 
Replicating this level of effort on a worldwide basis would be impractical, and still would not produce a complete count. Indeed, even after the Bureau’s aggressive marketing effort in 2000, it still had to follow up with about 42 million households that did not return their census forms. Moreover, because there are no reliable figures on the number of Americans residing overseas, the Bureau would not have a good measure of the number of people that did not participate, or the overall quality of the data. The Bureau’s experience in conducting the 2004 overseas test underscored the difficulties of administering a complex operation from thousands of miles away. Not surprisingly, as with any operation this complex, various challenges and unforeseen problems arose. While the Bureau was able to resolve them, its ability to do so, should there be a full overseas enumeration as part of the 2010 Census, would be highly questionable, as far more resources would be required. This was particularly evident in at least two areas: grappling with country-specific issues and overseeing the contractor responsible for raising public awareness of the census at the three test sites. The Bureau encountered a variety of implementation problems at each of the test sites. In some cases the problems were known in advance; in others, glitches developed at the last minute. Although such difficulties are to be expected given the magnitude of the Bureau’s task, a key lesson learned from the test is that there would be no economy of scale in ramping up to a full enumeration of Americans abroad. In fact, just the opposite would be true. 
Because of the inevitability of country-specific problems, rather than conducting a single overseas count based on a standard set of rules and procedures (as is the case with the stateside census), the Bureau might end up administering what amounts to dozens of separate censuses—one for each of the countries it enumerates—each with its own set of procedures adapted to each country’s unique requirements. The time and resources required to do this would likely be overwhelming and detract from the Bureau’s stateside efforts. For example, during the overseas test, the Bureau found that French privacy laws restrict the collection of personal data such as race and ethnic information. However, these data are collected as part of the decennial census because they are key to implementing a number of civil rights laws such as the Voting Rights Act. Addressing France’s privacy laws took a considerable amount of negotiation between the two countries, and the issue was ultimately resolved after a formal agreement was developed. The Bureau issued and posted on its Web site an advisory informing Americans living in France that it was not mandatory to respond to the questionnaire, that the only recipient of the collected data is the Census Bureau, that the data will be kept for one year, and that the respondent has a right to access and correct the data collected. The Bureau was able to collect race and ethnic data—generally a prohibited practice without the respondents’ permission—only after it received special approval from a French government agency. Initially, however, it looked as if the Bureau might have to redesign the census form if it wanted to use it in France. In Kuwait, delivery of the census materials was delayed by several weeks at the beginning of the test because they were accidentally addressed to the wrong contractor. Ultimately, the U.S. Embassy stepped in to accept the boxes so that the enumeration could proceed. 
In Mexico, there was some initial confusion on the part of Mexican postal workers as to whether they could accept the postage-paid envelopes that the Bureau had provided to return the paper questionnaires for processing in the United States. Because of the small number of countries involved in the test, the Bureau was able to address the various problems it encountered. Still, the Bureau’s experience indicates that it will be exceedingly difficult to identify and resolve in advance all the various laws, rules, societal factors, and a host of other potential glitches that could affect a full overseas enumeration. As noted previously, the Bureau hired a public relations firm to develop a communications strategy to inform and motivate respondents living in the test countries to complete the census. The firm’s responsibilities included identifying private companies, religious institutions, service organizations, and other entities that have contact with Americans abroad and could thus help publicize the census test. Specific activities the organizations could perform included displaying promotional materials and paper versions of the census questionnaire, publishing information in a newsletter, and posting information on their Web sites. Although the public relations firm appeared to go to great lengths to enlist the participation of these various entities—soliciting the support of hundreds of organizations in the three countries—the test revealed the difficulties of adequately overseeing a contractor operating in multiple sites overseas. For example, the public relations firm’s tracking system indicated that around 440 entities had agreed to perform one or more types of promotional activities. However, our on-site inspections of several of these organizations in Paris, France, and Guadalajara, Mexico, that had agreed to display the census materials and/or distribute the questionnaires, uncovered several glitches. 
Of the 36 organizations we visited that were supposed to be displaying promotional literature, we found the information was only available at 15. In those cases, as shown in figure 3, the materials were generally displayed in prominent locations, typically on a table with posters on a nearby wall. Five of these 15 organizations were also distributing the census questionnaire, but the forms were not readily accessible. However, at 21 sites we visited, we found various discrepancies between what the public relations firm indicated had occurred, and what actually took place. For example, while the firm’s tracking system indicated that questionnaires would be available at a restaurant and an English-language bookstore in Guadalajara, none were available. In fact, the owner of the bookstore told us that no one from the Census Bureau or the public relations firm had contacted her about displaying materials for the overseas test. At the University of Guadalajara, although the tracking system indicated that an official had been contacted about, and agreed to help support the census test, that official told us no one had contacted him. As a result, when boxes of census materials were delivered to his school without any explanatory information, he did not know what to do with them, and had to telephone the U.S. Consulate in Guadalajara to figure out what they were for. Likewise, in Paris, we went to several locations where the tracking system indicated that census information would be available. None was. In fact, at some of these sites, not only was there no information about the census, but there was no indication that the organization we were looking for was at the address we had from the database. The results of the overseas test point to the difficulties of overseeing the contractor’s performance. As census materials were made available at scores of locations across the three test countries, it would have been impractical for the Bureau to inspect each site. 
The difficulty of supervising contractors—and any field operation for that matter—would only be magnified in a global enumeration. The Bureau’s experience in counting the nation’s population for the 2000 and earlier censuses sheds light on some of the specific operations and other elements that together form the building blocks of a successful head count (see fig. 4). While performing these activities does not necessarily guarantee a cost-effective head count, not performing them makes a quality count far less promising and puts the entire enterprise at risk. The current approach to counting overseas Americans lacks these building blocks, as most are infeasible to perform on an overseas population. Each is discussed in greater detail below. Mandatory participation: Under federal law, all persons residing in the United States regardless of citizenship status are required to respond to the decennial census. By contrast, the overseas enumeration test was conducted as a voluntary survey where participation was optional. The Bureau has found that response rates to mandatory surveys are higher than the response rates to voluntary surveys. This in turn yields more complete data and helps hold down costs. Early agreement on design: Both Congress and the Bureau need to agree on the fundamental design of an overseas census. Concurrence on the design helps ensure adequate planning, testing, and funding levels. Conversely, the lack of an agreed-upon design raises the risk that basic design elements might change in the years ahead, while the opportunities to test those changes and integrate them with other operations will diminish. Under the Bureau’s current plans, after the 2006 test, the Bureau would have just one more opportunity to test its prototype for an overseas enumeration—a dress rehearsal in 2008. Any design changes after 2008 would not be tested in a real-world environment. 
The design of the census is driven in large part by the purposes for which the data will be used. Currently, no decisions have been made on whether the overseas data will be used for purposes of congressional apportionment, redistricting, allocating federal funds, or other applications. Some applications, such as apportionment, would require precise population counts and a very rigorous design that parallels the stateside count. Other applications, however, could get by with less precision and thus a less stringent approach. As we noted previously, Congress will need to decide whether or not to count overseas Americans, and how the results should be used. The basis for these determinations needs to be sound research on the cost, quality of data, and logistical feasibility of the range of options for counting this population group. Possibilities include counting Americans via a separate survey, administrative records such as passport and voter registration forms, and/or records maintained by other countries such as published census records and work permits. The Bureau’s initial research has shown that each of these options has coverage, accuracy, and accessibility issues, and some might introduce systemic biases into the data. Far more extensive research would be needed to determine the feasibility of these or other potential approaches. Once Congress knows the tradeoffs of these various alternatives, it will be better positioned to provide the Bureau with the guidance it needs to go beyond research and conduct field tests of specific approaches. The Bureau can conduct the research, or it can contract it out. Indeed, the National Research Council of the National Academy of Sciences has conducted a number of studies on the decennial census, including a review of the 2000 Census and an examination of reengineering the 2010 Census. A complete and accurate address list: The cornerstone of a successful census is a quality address list. 
For the stateside census, the Bureau goes to great lengths to develop what is essentially an inventory of all known living quarters in the United States, including sending census workers to canvass every street in the nation to verify addresses. The Bureau uses this information to deliver questionnaires, follow up with nonrespondents, determine vacancies, and identify households the Bureau may have missed or counted more than once. Because it would be impractical to develop an accurate parallel address list for overseas Americans, these operations would be impossible and the quality of the data would suffer as a result. Ability to detect invalid returns: Ensuring the integrity of the census data requires the Bureau to have a mechanism to screen out invalid responses. Stateside, the Bureau does this by associating an identification number on the questionnaire to a specific address in the Bureau’s address list, as well as by field verification. However, the Bureau’s current approach to counting overseas Americans is unable to determine whether or not a respondent does in fact reside abroad. So long as a respondent provides certain pieces of information, the questionnaire will be eligible for further processing. The Bureau is unable to confirm the point of origin for questionnaires completed on the Internet, and postmarks on a paper questionnaire only tell the location from which a form was mailed, not the place of residence of the respondent. The Bureau has acknowledged that ensuring such validity might be all but impossible for any reasonable level of effort and funding. Ability to follow up with non-respondents: Because participation in the decennial census is mandatory, the Bureau sends enumerators to those households that do not return their questionnaires. 
In cases where household members cannot be contacted or refuse to answer all or part of a census questionnaire, enumerators are to obtain data from neighbors, a building manager, or other nonhousehold member presumed to know about its residents. The Bureau also employs statistical techniques to impute data when it lacks complete information on a household. Thus, by the end of each decennial census, the Bureau has a fairly exhaustive count of everyone in the nation. As noted above, because the Bureau lacks an address list of overseas Americans, it is unable to follow up with nonrespondents or impute information on missing households. As a result, the Bureau will never be able to obtain a complete count of overseas Americans. Cost model for estimating needed resources: The Bureau uses a cost model and other baseline data to help it estimate the resources it needs to conduct the stateside census. Key assumptions such as response levels and workload are developed based on the Bureau’s experience in counting people decade after decade. However, the Bureau has only a handful of data points with which to gauge the resources necessary for an overseas census, and the tests it plans on conducting will only be of limited value in modeling the costs of conducting a worldwide enumeration in 2010. The lack of baseline data could cause the Bureau to over- or underestimate the staffing, budget, and other requirements of an overseas count. For example, this was evident during the 2004 overseas test when the Bureau estimated it would need around 100,000 Spanish-language questionnaires for the Mexican test site. As only 35 Spanish-language questionnaires were returned, it is now clear that the Bureau could have gotten by with printing far fewer questionnaires for Mexico. 
However, the dilemma for the Bureau is that its experience in the 2004 overseas test cannot be used to project the number of Spanish-language questionnaires it would need for Mexico or other Spanish-speaking countries in 2010. Similar problems would apply to efforts to enumerate other countries. Targeted and aggressive marketing campaign: The key to raising public awareness of the census is an intensive outreach and promotion campaign. As noted previously, the Bureau’s marketing efforts for the 2000 Census were far-reaching, and consisted of more than 250 ads in 17 languages that were part of an effort to reach every household, including those in historically undercounted populations. Replicating this level of effort on a global scale would be both difficult and expensive, and the Bureau has no plans to do so. Field infrastructure to execute census and deal with problems: The Bureau had a vast network of 12 regional offices and 511 local census offices to implement various operations for the 2000 Census. This decentralized structure enabled the Bureau to carry out a number of activities to help ensure a more complete and accurate count, as well as deal with problems when they arose. Moreover, local census offices are an important source of intelligence on the various enumeration obstacles the Bureau faces on the ground. For example, during the 2000 Census, the Bureau called on them to identify hard-to-count population groups and other challenges, and to develop action plans to address them. The absence of a field infrastructure for an overseas census means that the Bureau would have to rely heavily on contractors to conduct the enumeration, and manage the entire enterprise from its headquarters in Suitland, Maryland. Ability to measure coverage and accuracy: Since 1980, the Bureau has measured the quality of the decennial census using statistical methods to estimate the magnitude of any errors. 
The Bureau reports these estimates by specific ethnic, racial, and other groups. For methodological reasons, similar estimates cannot be generated for an overseas census. As a result, the quality of the overseas count, and thus whether the numbers should be used for specific purposes, could not be accurately determined. The 2004 test of the feasibility of an overseas enumeration was an extremely valuable exercise in that it highlighted the numerous obstacles to a cost-effective count of Americans abroad as an integral part of the decennial census. Although more comprehensive results will not be available until the Bureau completes its evaluation of the test early next year, a key lesson learned is already clear: The current approach to counting this population group—a voluntary survey that largely relies on marketing to ensure a complete count—would be costly and yield poor results. The tools and resources the Bureau has on hand to enumerate overseas Americans are insufficient for overcoming the inherent obstacles to a complete count, and it is unlikely that any refinements to this basic design would produce substantially better results, and certainly not at a level suitable for purposes of congressional apportionment. What’s more, the Bureau already faces the difficult task of carrying out a cost-effective stateside enumeration in 2010. Securing a successful count of Americans in Vienna, Virginia, is challenging enough; a complete count of Americans in Vienna, Austria—and in scores of other countries around the globe—would only add to the difficulties facing the Bureau as it looks toward the next national head count. As a result, we believe that any further tests or planning activities related to counting Americans overseas as part of the decennial census would be an imprudent use of the Bureau’s limited resources. 
That said, to the extent that Congress desires better data on the number and characteristics of Americans abroad for various policy-making and other nonapportionment purposes that require less precision, such information does not necessarily need to be collected as part of the decennial census, and could, in fact, be acquired through a separate survey or other means. To help inform congressional decision making on this issue, including decisions on whether Americans should be counted and how the data should be used, it will be important for Congress to have the results of the Bureau’s evaluation of the 2004 overseas census test. Equally important would be information on the cost, quality of data, and logistical feasibility of counting Americans abroad using alternatives to the decennial census. Once Congress knows the tradeoffs of these various alternatives, it would be better positioned to provide the Bureau with the direction it needs so that the Bureau could then develop and test an approach that meets congressional requirements at reasonable resource levels. Given the obstacles to a cost-effective count of overseas Americans as part of the decennial census and, more specifically, obtaining data that is of sufficient quality to be used for congressional apportionment, Congress may wish to consider eliminating funding for any additional research, planning, and development activities related to counting this population as part of the decennial headcount, including funding for tests planned in 2006 and 2008. However, funding for the evaluation of the 2004 test should continue as planned to help inform congressional decision making. Should Congress still desire better data on the number of overseas Americans, in lieu of the method tested in 2004, Congress might wish to consider authorizing and funding research on the feasibility of counting Americans abroad using alternatives to the decennial census. 
To facilitate congressional decision making, we recommend that the Secretary of Commerce ensure that the Bureau completes its evaluation of the 2004 overseas census test as planned. Further, to the extent that additional research is authorized and funded, the Bureau, in consultation with Congress, should explore the feasibility of counting overseas Americans using alternatives to the decennial census. Potential options include conducting a separate survey, examining how the design and archiving of various government agency administrative records might need to be refined to facilitate a more accurate count of overseas Americans, and exchanging data with other countries’ statistical agencies and censuses, subject to applicable confidentiality and other provisions. Consideration should also be given to whether the Bureau should conduct this research on its own or whether it should be contracted out to the National Academy of Sciences. The Secretary of Commerce forwarded written comments from the U.S. Census Bureau on a draft of this report on August 5, 2004, which are reprinted in the appendix. The Bureau agreed with our conclusions and recommendations. Furthermore, the Bureau noted, “should Congress request and fund” further research on counting overseas Americans, it would be equipped to do that research itself. As agreed with your offices, unless you release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time we will send copies to other interested congressional committees, the Secretary of Commerce, and the Director of the U.S. Census Bureau. Copies will be made available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or by e-mail at daltonp@gao.gov or Robert Goldenkoff, Assistant Director, at (202) 512-2757 or goldenkoffr@gao.gov. 
Key contributors to this report were Ellen Grady, Lisa Pearson, and Timothy Wexler.
The U.S. Census Bureau (Bureau) has typically counted overseas members of the military, federal civilian employees, and their dependents. However, it usually excluded private citizens residing abroad. In July 2004, the Bureau completed a test of the practicality of counting all overseas Americans. GAO was asked to assess (1) whether the Bureau implemented the test consistent with its design, and (2) the lessons learned from the test results. The Bureau generally implemented the overseas census test on schedule and consistent with its research design. Still, participation was poor, with just 5,390 questionnaires returned from the three test sites--France, Kuwait, and Mexico. Moreover, because of the low response levels, obtaining those questionnaires proved to be quite expensive--around $1,450 per response, which is far costlier on a unit basis than the 2000 Census. Although the two are not directly comparable because the 2000 Census included operations not used in the overseas test, the 2000 Census cost around $56 per household. Further, boosting the response rate globally might not be practical. On the domestic front, during the 2000 Census, the Bureau spent $374 million on a months-long publicity campaign that consisted of television and other advertising that helped yield a 72-percent return rate. Replicating this level of effort on a worldwide basis would be difficult, and still would not produce a complete count. Ensuring a smooth overseas count could also stretch the Bureau's resources. For example, at each test site the Bureau encountered various challenges that needed to be resolved such as French privacy laws. Moreover, managing a complex operation from thousands of miles away also proved difficult. The approach used to count the overseas population in the 2004 test--a voluntary survey that largely relies on marketing to secure a complete count--lacks the basic building blocks of a successful census. 
The Bureau has done some initial research on alternatives, but all require more extensive review. Given that the Bureau already faces the difficult task of securing a successful stateside count in 2010, having to simultaneously count Americans abroad would only add to the challenges facing the Bureau.
The National Defense Authorization Act for Fiscal Year 2006 established the authority for the Section 1206 and 1207 programs. Section 1206 authorizes the Secretary of Defense to use up to $350 million each year, with the concurrence of the Secretary of State, to train and equip foreign military and nonmilitary maritime forces, such as coast guards, to conduct counterterrorist operations or to support military and stability operations in which the U.S. armed forces are a participant. The authority will expire at the end of fiscal year 2011 if it is not renewed. Section 1207 of the fiscal year 2006 NDAA provides authority for DOD to transfer up to $100 million per fiscal year to State to support reconstruction, stabilization, and security activities in foreign countries. A congressional notification describing the project is required upon the exercise of the transfer authority. The funds are subject to the authorities and limitations in the Foreign Assistance Act of 1961, the Arms Export Control Act, or any law making appropriations to carry out such acts. The funds also remain available until expended. This authority was intended to be temporary and expires at the end of fiscal year 2010. The Foreign Military Financing program has traditionally been the primary mechanism for providing training and equipment assistance to foreign military forces. State and USAID have traditionally addressed civilian reconstruction, stabilization, and security needs abroad through programs funded by several foreign operations appropriations accounts, including Development Assistance; Economic Support Funds; Freedom Support Act (now Assistance for Europe, Eurasia, and Central Asia); International Narcotics Control and Law Enforcement; Nonproliferation, Antiterrorism, Demining, and Related Programs; Peacekeeping Operations; and Transition Initiatives. (See app. II for a description of U.S. foreign assistance programs and accounts.) 
For both the Section 1206 and 1207 programs, DOD and State established an interagency process to implement each program. Within DOD, the Office of the Assistant Secretary of Defense for Special Operations, Low Intensity Conflict, and Interdependent Capabilities has overall responsibility for both programs. This office coordinates primarily with State’s Bureau of Political-Military Affairs for the Section 1206 program and with State’s Office of the Coordinator for Reconstruction and Stabilization (State/S/CRS) for the Section 1207 program. DOD and State solicit project proposals for each program annually, in accordance with guidelines and project proposal instructions for each program that are revised periodically to reflect lessons learned, congressional concerns, and other considerations. Interagency boards review the proposals—approved by both the relevant U.S. combatant commander and ambassador—and select projects to recommend to the Secretaries of Defense and State for final funding approval. Once projects are approved, DOD and State may begin implementation after notification to designated congressional committees. For approved Section 1206 projects, the Defense Security Cooperation Agency assumes overall responsibility for procuring training and equipment, while security assistance officers (SAO)—posted at U.S. embassies and reporting to both the ambassador and the relevant U.S. geographic combatant commands—are responsible for coordinating in-country project implementation. For approved Section 1207 projects, country teams at U.S. embassies are responsible for implementing projects in cooperation with relevant State and USAID offices, while State/S/CRS is responsible for oversight. For fiscal years 2006 through 2009, DOD has allotted about $985 million for Section 1206 projects in 53 countries and $350 million for Section 1207 projects in 23 countries. Figures 1 and 2 depict the geographic distribution of Section 1206 and 1207 resources, respectively. 
(See app. III for detailed information on the geographic distribution of Section 1206 and 1207 funds.) The Section 1206 and 1207 programs incorporate a wide variety of assistance. The most common types of Section 1206 program assistance have been training and technical assistance and radios and other communications equipment. Under the Section 1207 program, the most common types of assistance activities are local government capacity development and police training and equipment. Tables 1 and 2 list the types of assistance provided by the Section 1206 and 1207 programs, respectively, and the number of countries receiving them. (See app. IV for more detailed information on the types of assistance provided through the Section 1206 and 1207 programs from fiscal years 2006 to 2009.) Figure 3 shows an example of radar and surveillance equipment provided to Malaysia and 36 other countries under the Section 1206 program to conduct coastal surveillance. The Section 1206 and 1207 programs have generally been consistent with U.S. strategic priorities relating to combating terrorism and addressing instability. DOD and State have devoted 82 percent of Section 1206 counterterrorism resources spent through fiscal year 2009 to addressing specific terrorist threats, primarily in countries designated as priorities by the U.S. government. DOD, State, and USAID devoted 77 percent of Section 1207 program resources to relatively unstable countries, mostly those the U.S. government has identified as vulnerable to state failure. Implementation of the Section 1206 program has generally been in alignment with U.S. counterterrorism priorities. Section 1206 authorizes DOD and State to build the capacity of partner nations’ national military forces to (1) conduct counterterrorist operations or (2) participate in or support military and stability operations in which the U.S. armed forces are a participant. 
From fiscal year 2006 to 2009, DOD and State allotted $932 million (95 percent) of all Section 1206 funding for counterterrorism-related equipment and training and $47 million (5 percent) to build the capacity of five partner nations to participate in stability operations with the United States. Overall, DOD and State have allotted 82 percent of these resources to projects that address specific terrorist threats, based on our review of approved project proposals. Furthermore, we found that most Section 1206 counterterrorism resources have been directed to countries that the U.S. intelligence community has identified as priority countries for the counterterrorism effort. The focus on specific terrorist threats increased in fiscal year 2009. In fiscal years 2007 and 2008, DOD and State allotted 75 percent ($405 million) of $536 million to fund Section 1206 projects targeted at specific terrorist threats. Proposals for the remaining projects identify global terrorist threats in general or security issues indirectly related to terrorism, such as ungoverned spaces and smuggling. For example, in the Caribbean region, several Section 1206 projects funded in fiscal years 2007 and 2008 were justified as countering a terrorist threat but did not specifically identify the source of that threat, and appeared to address narcotics trafficking more directly. In Albania, a U.S. official noted that the country received Section 1206 funding in fiscal year 2008 even though there was no significant terrorist threat there. He explained that Section 1206-funded boats would be used primarily to respond to potential security threats such as smuggling and human trafficking in coastal waters that the Albanian government had not previously patrolled. 
For fiscal year 2009, DOD and State issued instructions that project proposals must describe the “actual or potential terrorist threat” to be addressed and how the project responds to “an urgent and emergent threat or opportunity.” In line with these instructions, we found that 92 percent ($306 million) of the $334 million approved for fiscal year 2009 proposals identified a specific terrorist threat to be addressed (see fig. 4). The Section 1207 program has generally been consistent with U.S. stabilization priorities. According to State guidelines for the program, State uses DOD funds to provide reconstruction, stabilization, and security assistance to a foreign country for the purpose of restoring or maintaining peace and security. State has therefore indicated that countries eligible to receive Section 1207 funding should be at significant risk of instability or working to recover from instability. State uses a U.S. government source—an interagency “watchlist” developed to identify countries vulnerable to state failure—to help determine which countries could merit conflict prevention and mitigation efforts, and has established inclusion on the list as one of the criteria for a country to receive funding through the Section 1207 program. We found that most countries receiving Section 1207 funding appear on this watchlist. Further, according to our analysis of data we obtained from an independent risk forecasting firm, DOD, State, and USAID allotted 77 percent of Section 1207 funds to countries measuring high, very high, or extremely high levels of instability, as shown in figure 5. Our review of Section 1207 project proposals shows that these projects address either the prevention of instability in a particular country or region or the recovery from instability or conflict. Eighteen proposals (about two-thirds) were for projects to help countries recover from instability or conflict, as in Georgia, Kenya, and Lebanon. 
The remaining 10 proposals (about one-third) were for projects that help prevent instability, as in Bangladesh, Panama, and the Philippines. Instability levels are as reported by IHS Global Insight’s Global Risk Service in the country rating section for short-term, internal political risk. IHS Global Insight is a private forecasting company that provides economic, financial, and political analyses, including risk assessments, of over 200 countries. IHS Global Insight’s Global Risk Service monitors and updates country risk assessments on a quarterly basis. The Global Risk Service political risk score is a weighted average summary of probabilities that different political events, both domestic and external, such as civil war and trade conflicts, will reduce gross domestic product growth rates. The subjective probabilities are assessed by economists and country analysts at IHS Global Insight, on the basis of a wide range of information, and are reviewed by a team to ensure consistency across countries. According to DOD and State guidelines, the Section 1206 program has generally been distinct from other train and equip programs. DOD and State have used it to address unforeseen U.S. military needs relatively quickly compared with FMF and other programs. The Section 1207 program is not distinct from other programs, as it has funded reconstruction, stability, and security-related activities that are virtually indistinguishable from those of other foreign aid programs in their content and time frames. Furthermore, using Section 1207 program funding for these projects has entailed additional implementation costs and funding delays. According to DOD and State guidelines for fiscal year 2009, the Section 1206 program should be distinct from security assistance programs in that its projects (1) address U.S. 
military priorities; (2) respond to urgent and emergent needs; (3) do not overlap with other State and DOD train and equip programs, such as FMF, by “backfilling” lower-priority projects unfunded by those programs; and (4) are administered with a dual key, or DOD and State interagency process, to ensure they accord with U.S. foreign policy. DOD and State have consistently used Section 1206 to address U.S. military priorities. Each U.S. geographic military command reviews proposals from U.S. embassy country teams in its area of responsibility and endorses for final submission those proposed projects that address its highest priorities. Furthermore, the U.S. Special Operations Command reviews all Section 1206 project proposals to ensure that each aligns with U.S. military strategy and ranks each proposal across the geographic combatant commands in accordance with counterterrorism priorities. Our review of approved Section 1206 project proposals indicates that projects are designed primarily to address U.S. military requirements that are also aligned with the countries’ security interests. DOD officials we interviewed described the Section 1206 program as a way to meet U.S. military priorities that they may not have been able to address without it. For example, in Kazakhstan, according to a U.S. embassy official, DOD and State have used Section 1206 funds to address the U.S. priority of enhancing the country’s counterterrorism capacity in the Caspian Sea, while Kazakhstan has requested FMF funding for its priority to develop its military airlift capability. In Pakistan, U.S. officials used Section 1206 funds to increase special operations capacity to support counterterrorism operations on its western border, a U.S. military counterterrorism priority for which DOD and State had not been able to persuade the country to use FMF resources. 
DOD and State can use Section 1206 funds to respond to urgent and emergent needs more quickly than they have been able to do with FMF and other security assistance programs. With the Section 1206 program, DOD and State have often formulated and begun implementing projects within 1 fiscal year, while FMF projects have usually required up to 3 years of planning. U.S. geographic combatant commands and embassies submit project proposals early in the fiscal year, and DOD and State select projects for funding in the months that follow. DOD and State had already approved Section 1206 project proposals for fiscal year 2009 when we interviewed most SAOs, some of whom told us that equipment associated with those proposals had already begun to arrive in country. For example, radios approved as part of a fiscal year 2009 equipment package for Mali arrived and were installed in September of that same fiscal year. In contrast, several SAOs we interviewed in fiscal year 2009 and early fiscal year 2010 were either drafting or had recently submitted FMF requests for fiscal year 2012. This requires the SAOs to plan for training and equipment relatively far in advance, without necessarily knowing what the geopolitical context will be when the countries receive the assistance. According to DOD and State officials, this process, including consultation and negotiation with partner nations, incorporating funding requests into State’s budget, and obtaining appropriations, can take up to 3 years. Because DOD and State can review and approve Section 1206 project proposals more quickly than this, SAOs have used Section 1206 projects to begin addressing new requirements that DOD may not have foreseen when it submitted the FMF request for the same fiscal year. DOD officials we interviewed stated that the narrower goals of the Section 1206 program prevent overlap with the FMF program. They indicated that FMF program objectives have traditionally been to achieve a variety of U.S. 
foreign policy and partner nation military goals, which have not necessarily included counterterrorism and stability operations. For example, State has used the FMF program to strengthen bilateral relationships, gain access to foreign governments, foster long-term defense modernization of partner nations, and achieve other broad foreign policy objectives. Eight of 15 SAOs we interviewed noted that the Section 1206 projects they were implementing addressed objectives substantially different from those of the FMF program. SAOs further explained that there is no guarantee that partner nations will use FMF to fund counterterrorism and stability operations. For example, the SAO in Kazakhstan explained that FMF has been used to enhance diplomatic relations with that key ally by responding to its request for helicopters. The Section 1206 program is also distinct in that it allows the United States to provide partner countries with complete assistance packages, whereas other funding sources might provide only a portion of the aid needed to build a counterterrorism or stability operations capability. Eight of the 15 SAOs we interviewed noted that the Section 1206 program offered a unique means to provide bundled training and equipment, such as operations and maintenance training and spare parts. Agency officials in Washington, D.C., also attested that one of the unique strengths of the 1206 program is that it allows the United States to provide partner countries with comprehensive assistance packages. Of the 53 countries receiving assistance in fiscal years 2006 through 2009, 50 (94 percent) received spare parts or training, and 40 (75 percent) received both. 
SAOs we interviewed indicated that other programs, such as FMF, may be used to fund spare parts, or that Joint Combined Exchange Training might be used to provide additional training for foreign troops, but those programs may not be able to independently provide all the equipment and training components typical of a Section 1206 package. Although DOD and embassy officials we interviewed consistently explained why there was no overlap between Section 1206 projects and other programs, project proposals we reviewed have not always documented the distinctions. DOD and State revised program guidelines in fiscal year 2009 in response to congressional concerns regarding program overlap with counternarcotics and other funding sources. However, of the 25 approved Section 1206 project proposals for fiscal year 2009 that we reviewed, 11 identified similar ongoing efforts funded by FMF or other U.S. programs. Only 1 proposal clearly explained why there was no overlap with other programs, and the remaining proposals did not specifically address this issue. Also, during our overseas visits, we observed some potential overlap between Section 1206 projects and other U.S. security assistance programs that was not explained in corresponding project proposals. In the Bahamas, DOD and State used Section 1206 program funds to provide that country with the same type of boats that State had previously provided with International Narcotics Control and Law Enforcement funding. (See fig. 6.) In Kazakhstan, DOD and State used both the Section 1206 program and the Global Peace Operations Initiative to provide equipment to a Kazakh peacekeeping unit. The Global Peace Operations Initiative has also funded training and equipment for at least 572 foreign troops worldwide for deployments to operations in Iraq and Afghanistan, which could overlap with Section 1206 program stabilization objectives. Figure 7 shows an example of U.S. 
assistance to Kazakhstan to build its capacity for conducting stability operations, in part by providing spare parts for its ground vehicles. DOD and State have used a dual key decision-making process for selecting Section 1206 projects, and in doing so have addressed three key practices for interagency collaboration we have previously identified. DOD and State incorporate interagency input at several stages of the Section 1206 proposal development and selection process. First, SAOs at recipient country embassies have typically developed Section 1206 project proposals—including objectives and implementation strategies—with input from State and other colleagues. For instance, 12 of the 15 SAOs we interviewed indicated that they had requested country team counterparts to at least review, if not help draft, Section 1206 proposals before submitting them. Through this process, DOD and State have defined common outcomes and joint strategies for achieving them, two key practices for interagency collaboration. Second, the relevant U.S. geographic combatant commander and ambassador have approved each proposal before officially submitting it to the Joint Staff for consideration. Once a proposal is submitted, a DOD-State working group reviews it and considers how Section 1206 projects will support U.S. foreign policy and foreign assistance goals. Last, the Secretary of State concurs with the Secretary of Defense’s approval of Section 1206 projects, thereby leveraging resources for mutually beneficial projects—another key practice for enhancing interagency collaboration. DOD and State guidelines indicate that Section 1207 projects should fund activities that are distinct from those of other U.S. government foreign assistance programs, and address urgent or emergent threats or opportunities that conventional foreign assistance programs cannot address in the required time frame. 
Section 1207 program-funded projects are consistent with the purposes stated in the law but are not distinct from activities funded by other foreign assistance programs. Overall, Section 1207 projects achieved objectives commonly addressed through a variety of other programs. In our country visits to Haiti, Georgia, and the Philippines, we observed many Section 1207 program-funded activities with objectives similar to those of prior or existing State and USAID programs in those countries. Moreover, according to State and USAID officials in those countries, the same activities implemented through Section 1207 funding could be accomplished with additional funding from traditional foreign assistance accounts, such as Economic Support Funds and Assistance for Europe, Eurasia, and Central Asia. Haiti’s Section 1207 project in fiscal year 2007 was aimed at stabilizing Cité Soleil, an urban area of Port-au-Prince, Haiti’s capital, through rapid implementation of short-term job creation activities, infrastructure improvements, and security enhancement through police training and equipment. However, from 2004 to 2006 (prior to the Section 1207 project) USAID had implemented the Haiti Transition Initiative, which attempted to stabilize urban areas, such as Cité Soleil, by rebuilding local services and infrastructure and providing short-term employment. In 2005, USAID also began the Urban Peace-Building Initiative, which attempted to stabilize urban areas, including Cité Soleil, through economic development. According to a USAID official in Haiti, this initiative was the precursor to Haiti’s Section 1207 project. USAID used existing contracts with nongovernmental organizations implementing other projects to carry out the short-term job creation and infrastructure improvements in Haiti’s Section 1207 project (see fig. 8). 
Georgia’s Section 1207 project in fiscal years 2008 and 2009 provided reconstruction assistance after the August 2008 Russian invasion, including support for resettlement of internally displaced persons (see fig. 9), police training and equipment, and removal of unexploded ordnance. However, according to State and USAID embassy officials, the Section 1207 project funded some activities with objectives that were previously being addressed through existing programs. For example, by amending a cooperative agreement with a nongovernmental organization partner, USAID carried out its Section 1207-funded school rehabilitation activities through an infrastructure initiative that had been operating since 2004. Also, plans by State’s Bureau of International Narcotics and Law Enforcement Affairs (State/INL) to use Section 1207 funds to upgrade the Ministry of Interior’s emergency communications system and national criminal database were continuations of previously established State/INL programs in Georgia. The removal of unexploded ordnance to facilitate the return of internally displaced persons was carried out through a State humanitarian demining program that had been operating in Georgia for several years. The Philippines’ Section 1207 projects in fiscal years 2007 and 2009 aimed to stabilize the region of Mindanao through economic development, with a focus on infrastructure development activities as well as police training and equipment. However, USAID implemented the Section 1207 infrastructure development activities in Mindanao through an existing program—Growth with Equity in Mindanao—which had been carrying out similar activities in the region since 1995 (see fig. 10). Also, the Department of Justice has been conducting similar police training and equipment activities in the Philippines, including in Mindanao, since 2006. 
In addition, we reviewed all 28 approved Section 1207 proposals for fiscal years 2006 through 2009, including the 6 proposals for the countries we visited. We found that 22 proposals expanded on recent or ongoing State and USAID activities funded through other foreign assistance accounts. For example, Colombia’s Section 1207 project in fiscal year 2007, which aimed to stabilize regions of that country recently freed from insurgent control, supported an interagency body of the Colombian government that the U.S. Southern Command had funded 3 years earlier. In addition, Tajikistan’s Section 1207 project in fiscal year 2008, intended to reduce the potential for conflict in unstable areas, supports community policing and local government development activities that build upon previous and continuing USAID and State initiatives. Finally, Uganda’s Section 1207 project in fiscal year 2009, aimed at reestablishing the rule of law in the north of the country, includes training for police and construction of community justice centers, which have both been implemented under previous and current USAID initiatives. In December 2009, the Congress established the Complex Crises Fund, which provides greater flexibility to USAID to prevent or respond to emerging or unforeseen complex crises overseas. The Congress appropriated $50 million for this fund, which the Administrator of USAID, in consultation with the Secretary of State, can use to fund programs and activities that prevent or respond to such crises. Furthermore, in its proposed budget for fiscal year 2011, released in February 2010, State requested another $100 million in flexible contingency funding to meet unforeseen reconstruction and stabilization needs. This request is intended to transition the funding of the Section 1207 program from DOD to State. DOD has not requested Section 1207 funding for fiscal year 2011. 
We found that State and USAID can provide funding to address urgent or emergent threats or opportunities at least as quickly through other foreign assistance programs as through the Section 1207 program. For example, in Georgia, where DOD and State allotted $100 million in Section 1207 funds for reconstruction projects after the 2008 Russian invasion, State provided over $50 million in Economic Support Funds to start similar projects before the full amount of Section 1207 funds was available. In the Philippines, when faced with an initial delay in receiving approved Section 1207 funds in fiscal year 2007 for police training in Mindanao, State reprogrammed International Narcotics Control and Law Enforcement funds for this purpose. Furthermore, our review of Section 1207 project proposals shows that the proposals for projects in Lebanon in fiscal year 2006 and Kenya in fiscal year 2009 describe using reprogrammed funds from conventional accounts alongside Section 1207 funds to help achieve similar stabilization goals. Using Section 1207 funding for reconstruction, stabilization, and security-related projects has created a new layer of program management through State’s Office of the Coordinator for Reconstruction and Stabilization (State/S/CRS)—the office responsible for oversight of the Section 1207 program—which has entailed additional implementation costs and funding delays with negative consequences. In addition to State and USAID’s normal administrative costs for implementing an assistance project, State/S/CRS charges a fee for oversight of Section 1207 projects to cover the cost of program support and coordination from Washington, D.C., and in the field. For fiscal years 2008 and 2009, this fee totaled nearly $2.5 million, which State/S/CRS deducted from the project funds DOD transferred to State. 
When added to State and USAID administrative costs of nearly $5.4 million during the same period, the State/S/CRS fee represents a 46 percent increase in overall administrative costs for Section 1207 projects during these 2 years. Furthermore, according to embassy officials we spoke to in Haiti, Georgia, and the Philippines, State/S/CRS oversight of the Section 1207 program has not necessarily improved project implementation or effectiveness. State and USAID officials at these embassies questioned the added value of State/S/CRS’s oversight of the Section 1207 program. According to the officials, State/S/CRS offers to coordinate interagency efforts and facilitate interagency collaboration within the country teams to help develop and execute Section 1207 projects. However, the embassy officials stated that interagency collaboration is already a part of how their country teams operate, through country team working groups and the development of mission strategic plans, and that the ambassador or deputy chief of mission can encourage such collaboration when necessary. In our discussions, State/S/CRS officials identified their ability to facilitate a whole-of-government approach for embassy country teams as their key added value and cited six countries—Lebanon, Nepal, Panama, Sri Lanka, Tajikistan, and Uganda—where their involvement brought benefits. However, they did not provide any documentation to support this claim. We also found that addressing urgent or emergent threats or opportunities through the Section 1207 program has caused funding delays, which have had some negative consequences. In two countries we visited, funding for State/INL-implemented activities was significantly delayed compared with funding for USAID activities within the same project. In the Philippines, U.S. 
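The 46 percent figure follows directly from the fee and cost amounts reported above; as an illustrative arithmetic check (a sketch using the rounded dollar figures as reported):

```python
# Illustrative check of the reported administrative-cost increase
# (amounts in millions of dollars, fiscal years 2008-2009, as reported).
scrs_fee = 2.5      # State/S/CRS oversight fee deducted from transferred funds
admin_costs = 5.4   # State and USAID administrative costs over the same period

increase_pct = scrs_fee / admin_costs * 100
print(f"Added oversight cost: about {increase_pct:.0f} percent")  # about 46 percent
```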
embassy officials told us that State borrowed funds from an existing police assistance project in order to start its Section 1207-funded police training on time, with an understanding that Section 1207 funds would arrive quickly for reimbursement. However, the Section 1207 funds took 6 months longer than expected to arrive, which subsequently delayed the existing police assistance project by 18 months and decreased the overall quantity of equipment procured. According to officials at the U.S. embassy in Georgia, a 6-month delay in receiving State’s Section 1207 funds for law enforcement activities interfered with the embassy’s goal of simultaneously improving the security and economy in the conflict zone. We also found in our review of quarterly reporting documents for all Section 1207 projects that funding delays for State activities were an issue in at least three other countries—Bangladesh, Kenya, and Malaysia. For example, in Kenya, U.S. embassy officials reported that the delay of Section 1207 funds for State’s police assistance resulted in the postponement of a State/INL assessment visit necessary to begin providing assistance. In contrast, DOD and State implement the Section 1206 program within the existing management structure of FMF, under the auspices of the Defense Security Cooperation Agency. Hence, the Defense Security Cooperation Agency charges the same administrative fees for both programs and procures training and equipment at least as quickly for Section 1206 projects as for FMF. The long-term impact of Section 1206 projects is at risk, because it is uncertain whether funds will be available to sustain the military capabilities that these projects are intended to build. U.S. law and DOD and State policies limit the use of U.S. government funds for sustainment of Section 1206 projects, and most participating countries have relatively low incomes and may be unwilling or unable to provide the necessary resources. 
For the Section 1207 program, since State, USAID, and DOD are not restricted by law or agency policy from drawing on a variety of overlapping funding sources to continue and expand Section 1207 projects, sustainment risks are not as significant. According to State planning documents, including department- and bureau-level performance plans, helping partner nations achieve sustainable counterterrorism capabilities is a key foreign policy objective. In addition, the joint DOD and State Inspectors General report on the Section 1206 program found that continued sustainment is essential to achieving the intended objectives of the Section 1206 program and that long-term sustainability of Section 1206 projects depends on continued investment by the partner nations or U.S. government. DOD officials have noted that some Section 1206 projects are intended to address an immediate threat and may not require long-term sustainment. Nevertheless, according to Section 1206 project proposal instructions, proposals must explain how projects will be sustained in future years. However, we found that the availability of sustainment funds from the U.S. government is uncertain. DOD and State policy has potentially constrained the use of U.S. government funding for Section 1206 project sustainment. According to fiscal year 2009 program guidelines, the Section 1206 program should not fund projects that must be continued over long periods (more than 3 years) to achieve a capability for a partner nation. However, Section 1206 projects are highly dependent on U.S. funding for long-term sustainment. Prior to fiscal year 2009, 62 (56 percent) of 110 approved Section 1206 proposals we reviewed indicated that FMF resources would be used to sustain projects. Other potential sources of sustainment funds identified in proposals include partner nations’ own resources and other U.S. programs. 
Despite the new guidelines, 13 (52 percent) of the 25 approved fiscal year 2009 Section 1206 proposals we reviewed indicated that partner nations would use FMF resources to sustain Section 1206 projects. Furthermore, 11 (73 percent) of the 15 SAOs we interviewed had already requested or planned to request FMF resources to sustain Section 1206 projects. However, several SAOs were not certain that State would award the funds they had requested. State determines FMF allotments to recipient countries based on congressional direction and availability of funds, and at the time of our interviews, State had not finalized fiscal year 2010 allotments. Moreover, in fiscal years 2006 through 2009, 18 (34 percent) of the 53 Section 1206 recipient countries did not receive any FMF funding. While proposals continue to cite FMF for Section 1206 project sustainment, a provision of the fiscal year 2009 Omnibus Appropriations Act, which prohibits the use of FMF funds to support or continue any Section 1206 project unless the Secretary of State justifies such use to the Committees on Appropriations, may further limit the availability of FMF for this purpose. The ability of partner nations to sustain Section 1206 projects in the absence of U.S. funding is also uncertain. DOD and State have not required countries to sign formal commitments to sustain Section 1206 projects, and only 35 (26 percent) of the 135 proposals we reviewed for fiscal years 2007 through 2009 explicitly address the recipient country’s ability or willingness to bear sustainment costs. Furthermore, only 9 (7 percent) of those 135 proposals provided estimates of the project’s maintenance, operation, or other sustainment costs. 
Moreover, DOD and State have implemented 113 (76 percent) of 149 Section 1206 projects in low- or lower-middle-income countries, as classified by the World Bank, where funding for sustainment efforts may be scarce. Only 1 of the SAOs we interviewed in 15 countries indicated that he believed his partner nation had the ability to sustain its Section 1206 projects independently; 6 SAOs said that they did not believe their partner nations had this ability, and 8 were uncertain. For example, the SAO in Nigeria was concerned about that country’s ability to support long-term maintenance activities for the vehicles, surveillance systems, and other Section 1206-funded equipment. Similarly, the SAO in Mali noted that sustainment of the Section 1206 project to train and equip that country’s light infantry units would be problematic if the country had to find its own funding. Only the SAO in Malaysia believed that the partner nation would fund the necessary sustainment of its maritime surveillance projects, based on that government’s stated intention to do so. Furthermore, Section 1206 program managers at U.S. geographic combatant commands questioned whether partner nations were likely to sustain Section 1206 projects. For example, at the U.S. Africa Command, the Section 1206 program manager explained that while the command would prefer that partner nations budget for sustainment activities, it was unlikely this would happen. Since the Section 1207 program does not have the same statutory or policy constraints as the Section 1206 program on using other U.S. assistance program resources to sustain projects, State and USAID use other U.S. assistance program resources for this purpose. 
State and DOD acknowledged in fiscal year 2008 guidelines that Section 1207 projects should seek to achieve short-term security, stabilization, or reconstruction objectives that are coordinated with longer-term development efforts to be sustained by the host government, international organizations, or other forms of U.S. foreign assistance. In our visits to Haiti, Georgia, and the Philippines, we found that State and USAID have provided assistance through other projects that are similar to Section 1207 projects and help sustain and consolidate their impacts. For example, in Haiti, USAID’s implementing partners have helped support the goals of the fiscal year 2007 Section 1207 project by funding assistance activities in Cité Soleil and neighboring areas through other ongoing USAID projects. In addition, in September 2009, State and USAID officials in Haiti told us that they planned to continue efforts to stabilize Port-au-Prince by using Economic Support Funds and International Narcotics Control and Law Enforcement funding. In the Philippines, where the Section 1207 project in fiscal year 2007 has attempted to stabilize the region of Mindanao through economic development, USAID applied funds from an ongoing project in the region to supplement a Section 1207 activity—an upgrade to a local water distribution system—that required additional support. Furthermore, in our review of all 28 proposals for Section 1207 projects, we found that 21 proposals address the issue of sustainment by identifying possible sources of funding to sustain or build on project results. Among the 21 proposals, 17 identify additional U.S. foreign assistance funding as a source, 10 cite host government resources, and 5 mention other donors, such as other countries and international organizations. 
Only 3 proposals identify host government resources as the sole source of possible sustainment funding: Two of these are for upper-middle-income countries and the third is for a lower-middle-income country. Not every project goal funded through the Section 1207 program requires sustainment funding. For example, in Georgia’s Section 1207 project in fiscal year 2008, USAID funded a “winter wheat” initiative, which was designed as onetime assistance to provide seed, fertilizer, and other supplies so that farmers disrupted by the 2008 Russian invasion could produce a wheat crop in the months after the conflict. As a result of this initiative, the farmers harvested a better-than-expected wheat crop in the fall of 2009, according to the Georgian Deputy Minister of Agriculture. DOD and State have conducted little monitoring and evaluation of the Section 1206 and Section 1207 programs. DOD and State have not carried out systematic program monitoring for the Section 1206 program, and reporting has generally consisted of anecdotal information, although DOD has taken initial steps to establish such a system. For the Section 1207 program, State requires quarterly reporting on project implementation but has not analyzed this information or reported results to DOD to inform program management and funding decisions. As a result of these deficiencies, U.S. agencies have made decisions to sustain and expand both Section 1206 and 1207 projects without formal assessments of project progress or impact. The Government Performance and Results Act of 1993 requires agencies to develop objective performance measures, monitor progress on achieving goals, and report on their progress in their annual performance reports. Our previous work has noted that the lack of clear, measurable goals makes it difficult for program managers and staff to link their day-to-day efforts to achieving the agency’s intended mission. 
Furthermore, according to Standards for Internal Control in the Federal Government, U.S. agencies should monitor and assess the quality of performance over time. In addition, we have previously reported that key practices for enhancing and sustaining interagency collaboration include developing mechanisms to monitor, evaluate, and report the results of collaborative programs; reinforcing agency accountability for collaborative efforts through agency plans and reports; and reinforcing individual accountability for collaborative efforts through agency performance management systems. Also, the Congress has directed the Secretaries of Defense and State to report on the implementation and impact of Building Global Partnership authorities provided under the Section 1206 and 1207 authorities no later than December 31, 2010. DOD and State have not consistently defined performance measures for their Section 1206 projects, although the agencies have made some improvement in doing so. Section 1206 program guidelines and instructions for fiscal year 2007 required project proposals to identify measures of effectiveness, and in fiscal year 2008, revised instructions required project proposals to identify the anticipated outcomes. However, we found that only 27 percent (30) of 110 approved proposals for fiscal years 2007 and 2008 provided this information. DOD and State refined the instructions for fiscal year 2009 by once again requiring project proposals to identify measures of effectiveness. As a result, 72 percent (18 of 25) of projects approved in fiscal year 2009 include this information. Overall, DOD and State have defined measures of effectiveness or anticipated outcomes for only 32 percent (48 of 149) of all projects approved from fiscal years 2006 through 2009. Furthermore, DOD and State have not established a plan to monitor and evaluate Section 1206 program results systematically. 
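The percentages in the proposal review above combine the two review periods; a quick arithmetic check (a sketch using the counts as reported in the text, where 149 is the total number of projects approved in fiscal years 2006 through 2009):

```python
# Illustrative check of the proposal-review counts reported above.
fy07_08 = (30, 110)  # proposals with measures/outcomes, fiscal years 2007-2008
fy09 = (18, 25)      # proposals with measures of effectiveness, fiscal year 2009
total_projects = 149 # all projects approved, fiscal years 2006 through 2009

pct_07_08 = fy07_08[0] / fy07_08[1] * 100               # about 27 percent
pct_09 = fy09[0] / fy09[1] * 100                        # 72 percent
overall = (fy07_08[0] + fy09[0]) / total_projects * 100 # 48 of 149, about 32 percent
print(round(pct_07_08), round(pct_09), round(overall))
```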
DOD officials stated that they had not consistently monitored Section 1206 projects, and State officials were not involved with or aware of a formal evaluation process. In addition, only 34 (25 percent) of 135 approved fiscal year 2007-2009 proposals we reviewed documented an intention to monitor project results. Some SAOs we interviewed noted that embassy officials sometimes informally monitor Section 1206 project activities. For example, in Georgia, U.S. military trainers observed the use and maintenance of some Section 1206 program-funded equipment when they helped prepare troops for deployment to Afghanistan. Also, in Sri Lanka, DOD officials inspected some Section 1206 equipment when they hosted an Inspector General visit to the country. Although regular reporting on performance is an established good management practice, DOD and State have not required Section 1206 program managers to report on progress or results. Only one of the six U.S. geographic combatant commands indicated that it routinely required SAOs implementing Section 1206 projects overseas to submit regular progress reports. Furthermore, 13 of the 15 SAOs we interviewed indicated that they do not routinely submit any formal reports to DOD or State on the Section 1206 projects they implement. For example, 1 SAO indicated that no reports were required and that he had not volunteered to write any. A few SAOs noted that they report the status of equipment deliveries, but not project results or impact. DOD and State have undertaken two evaluations of the Section 1206 program, focusing largely on initial projects. The first, prepared by a contractor in July 2008, addressed fiscal year 2006 and 2007 projects in Lebanon, Yemen, Pakistan, and São Tomé and Principe. The second, prepared jointly by the DOD and State Inspectors General, focused on seven countries with projects approved in fiscal year 2006. 
Since DOD and State had not established objective performance measures for most of the projects reviewed, these reports relied heavily on anecdotal information to assess progress and effectiveness. These monitoring, evaluation, and reporting deficiencies may stem from DOD’s and State’s unclear assignment of roles and responsibilities for these tasks. We have previously reported that clearly identifying roles and responsibilities and establishing policies to operate across agency boundaries are key practices for enhancing interagency collaboration. However, DOD and State have not applied these practices for Section 1206 program monitoring and evaluation. Section 1206 program managers we spoke to at U.S. geographic combatant commands had varied opinions regarding who should be responsible for monitoring Section 1206 projects. For example, officials at the U.S. Central Command indicated that monitoring and evaluation should be the joint responsibility of State, relevant embassies’ chiefs of mission, U.S. geographic combatant commands, and the SAOs. The security assistance manager at the U.S. Africa Command understood that monitoring was a responsibility of the relevant embassy country teams. Meanwhile, DOD officials in the Office of the Secretary of Defense told us they thought the U.S. geographic combatant commands should evaluate Section 1206 projects. One project proposal indicated only that “the embassy” should be responsible for monitoring the project in question, without identifying any particular office for this task. Although DOD and State lack a monitoring and evaluation system, they have requested additional funding to sustain Section 1206 projects without documented evidence of results. SAOs have sometimes submitted FMF requests for Section 1206 project sustainment before the projects are fully implemented in order to have those funds available by the end of the 2-year period for which spare parts are typically included in Section 1206 packages. 
For example, the SAO in Lebanon explained that the Lebanese Armed Forces planned to use FMF to sustain its Section 1206 projects, and that he had already submitted FMF requests to that end even though most Section 1206 projects in Lebanon had not yet been fully implemented. In Ukraine, the SAO has submitted FMF requests for fiscal years 2010 through 2012, although some of the Section 1206 equipment had not yet been shipped to the country. According to a DOD official in the Office of the Assistant Secretary of Defense for Special Operations, Low Intensity Conflict, and Interdependent Capabilities—the office with overall responsibility for the Section 1206 program—DOD has begun to implement a new two-phase initiative to assess Section 1206 projects. This assessment process is intended to use both quantitative and qualitative performance-related data to form the basis for measuring progress toward desired project outcomes. For the first phase, DOD has hired a contractor to identify current Section 1206 roles, data sources, and ongoing assessment activities to develop a framework for implementing Section 1206 assessments. The contract was signed in January 2010 and the final deliverable is due 8 months later. According to the official, the second phase will consist of using the newly designed framework to assess a sample of Section 1206 projects. In addition, the official indicated that resources would not be available to evaluate all Section 1206 projects, and that the agency had not yet determined what sample of countries would be assessed. In general, State and USAID have established measures of effectiveness for individual Section 1207 projects. In our review of all 28 approved proposals for the Section 1207 program, we found that 25 proposals identified measures of effectiveness or performance indicators. 
For example, in the Philippines, State and USAID indicated that they would assess the effectiveness of a Section 1207 project by measuring changes in private sector investment, the prevalence of waterborne diseases, and police response times, among other performance indicators. State and DOD first issued guidelines for Section 1207 project monitoring in January 2008, 2 years after the program began. According to these guidelines, embassies with Section 1207 projects are responsible for submitting quarterly progress reports containing both narrative and financial data to State’s Office of the Coordinator for Reconstruction and Stabilization (State/S/CRS) and to DOD’s Office of Partnership Strategy and Stability Operations. According to the guidelines, the reports should describe the project’s progress against the measures of effectiveness established in the project proposal, identify any challenges expected over the next quarter, and describe the expenditure to date on different project activities. State/S/CRS officials told us that, initially, embassies typically submitted these reports several months later than expected, but that punctuality improved after State/S/CRS hosted a Section 1207 program conference in May 2009. Since then, State/S/CRS officials said they usually receive reports within 30 days after the end of the quarter. State/S/CRS officials told us that they had not fully analyzed the quarterly reports they received. According to these officials, State/S/CRS began systematically analyzing the financial information contained in the reports in April 2009, thereby monitoring the progress of project implementation by tracking the obligation and expenditure of funds over time for each component of the projects. However, State/S/CRS officials indicated that while they routinely reviewed the reports’ narrative sections when they arrived, they had not systematically analyzed them because of staffing shortages. 
Thus, State/S/CRS was not systematically monitoring project effectiveness or implementation challenges described in the narrative section of these reports as a basis for providing program oversight. In December 2009, State/S/CRS assigned an additional employee to review the narrative reports. Although Section 1207 program guidelines instruct embassies to submit quarterly reports to both State and DOD, embassies have not been sending these reports to DOD, and State/S/CRS has not forwarded them. State/S/CRS officials indicated that they have provided DOD information on problems with Section 1207 projects but not on progress or effectiveness. An official in DOD’s Office of Partnership Strategy and Stability Operations responsible for Section 1207 program issues told us that, as of mid-December 2009, he had not received any Section 1207 quarterly reports, but that he was working with State/S/CRS to develop an evaluation process for Section 1207 projects. Because of limited monitoring and evaluation, State and DOD have made decisions about sustaining Section 1207 projects without documentation on project progress or effectiveness. For example, officials at the U.S. Southern Command told us that they did not support a proposal from the U.S. embassy in Haiti for a second Section 1207 project in fiscal year 2008 because they were not aware of the implementation progress or results of the first project. Nevertheless, State/S/CRS officials told us that the information obtained from the quarterly reports informed decisions about proposal approval and funding. State/S/CRS officials told us that in January 2010 they began efforts to develop information for the congressionally required report on the implementation and impact of the Section 1207 program, which is due on December 31, 2010. 
In particular, State/S/CRS offered to hire evaluation specialists to help embassies receiving Section 1207 program funds in fiscal year 2009 meet the congressional reporting requirement by developing a monitoring strategy and carrying out data collection and analysis. State/S/CRS has not offered this assistance to embassies that received program funds in prior years, which represent 59 percent of all Section 1207 funding through fiscal year 2009. The Section 1206 and 1207 programs are aimed at achieving high-priority counterterrorism, stabilization, reconstruction, and security objectives for the United States. Anecdotal evidence from some early Section 1206 and 1207 projects suggests that individual projects under both programs could achieve noteworthy results, but achieving long-term results from the projects is likely to require a sustained U.S. effort, especially in poorer countries. State and USAID can draw upon traditional foreign aid programs to continue nonmilitary assistance initiated under Section 1207. However, as the appropriate funding source for sustaining military assistance under Section 1206 is unclear, given current legal restrictions and agency policy, DOD and State need guidance from the Congress on how to fund longer-term assistance. Furthermore, without a rigorous monitoring and evaluation system, DOD and State have gathered little evidence to demonstrate whether the programs have been effective or whether continued funding should be provided to sustain the efforts they have initiated. The Section 1207 authority has allowed DOD to infuse existing USAID and State programs with additional resources to help those agencies achieve their objectives. However, channeling these resources through the Section 1207 authority has created a new layer of program management, which appears to be largely redundant and entails additional implementation costs and funding delays. 
Moreover, a new funding source for projects similar to those of the Section 1207 program may supplant the need to continue Section 1207 funding. In preparing to reauthorize U.S. national defense programs, the Congress should consider requiring the Secretaries of Defense and State to document how Section 1207 projects are distinct from those of other foreign assistance programs and that these projects incur no additional implementation costs and experience no funding delays beyond those of other foreign assistance programs. In the absence of this documentation, the Congress should consider not reauthorizing the Section 1207 program for fiscal year 2011 and, instead, appropriating funds to State and USAID programs. Recommendations for Executive Action: We are making five recommendations relating to the Section 1206 and 1207 programs. For the Section 1206 program, we recommend that the Secretary of Defense, in consultation with the Secretary of State, (1) develop and implement specific plans to monitor, evaluate, and report routinely on Section 1206 project outcomes and their impact on U.S. strategic objectives; (2) base further decisions about sustaining existing Section 1206 projects on the results of such monitoring and evaluation; (3) estimate the cost of sustaining projects at the time they are proposed and, where possible, obtain a commitment from partner nations to fund those costs; and (4) seek further guidance from the Congress on what funding authorities are appropriate to sustain Section 1206 projects when the Secretary determines that (a) projects address specific terrorist and stabilization threats in high-priority countries, (b) reliable monitoring and evaluation have shown that projects are effective, and (c) partner nation funds are unavailable. 
For the Section 1207 program, we recommend that the Secretary of Defense, in consultation with the Secretary of State and the Administrator of USAID, develop and implement specific plans to monitor, evaluate, and report on Section 1207 project outcomes and their impact on U.S. strategic objectives to determine whether continued funding for these projects is appropriate under other authorities and programs. We provided a draft of this report to DOD, State, and USAID. We received written comments from all three, which we have reprinted in appendixes V, VI, and VII, respectively. The agencies also provided technical comments, which we incorporated throughout the report, as appropriate. DOD concurred with all of our recommendations. State indicated in its written comments that it appreciated the observations contained in our report and would take them into account when shaping the Complex Crises Fund, which State requested for fiscal year 2011 to replace the Section 1207 program. State noted that this new fund will solve many of the issues outlined in our report, including an unwieldy funds transfer process that has sometimes prevented as rapid a response to immediate needs as State would have preferred. State also indicated that our findings regarding the limited monitoring and evaluation for the Section 1207 program and additional administrative costs entailed by the program were contradictory, noting that State has increasingly developed and refined its monitoring and evaluation of Section 1207 projects, which requires administrative resources to carry out. We disagree. While State/S/CRS had taken some steps to increase its monitoring of Section 1207 projects, it had neither systematically analyzed embassy reports on the effectiveness of Section 1207 projects nor provided these reports to its DOD counterparts responsible for the projects’ funding. 
Accordingly, we do not believe that these efforts justified the additional fees this office charged beyond those that State and USAID already charged to implement the projects. USAID noted in its written comments that our report highlights several issues of interest to all agencies participating in the Section 1207 process and that USAID looks forward to continuing to refine its business processes based on our review. We are sending copies of the report to the Secretaries of Defense and State, interested congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-8979 or at christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Our review encompassed all projects funded by the Department of Defense (DOD) under authorities in Sections 1206 and 1207 of the National Defense Authorization Act of Fiscal Year 2006, as amended, during fiscal years 2006 through 2009. For more in-depth project review, we focused on 18 of the 62 countries receiving assistance under these programs: Albania, the Bahamas, Georgia, Haiti, Kazakhstan, Malaysia, and the Philippines, where we visited with U.S. embassy officials and host country officials, as well as implementing partner representatives in Section 1207 recipient countries; Ethiopia, Pakistan, and Uganda, where we interviewed U.S. embassy officials in conjunction with other GAO work; and Honduras, Kenya, Lebanon, Mali, Mexico, Nigeria, Sri Lanka, and Ukraine, where we conducted interviews with security assistance officers (SAO) or other project managers via telephone. 
To select countries to visit, we ranked all 62 countries based on the following criteria: (1) the amount of Section 1206 and 1207 program funding a country had received in order to include countries representing a significant portion of total funding, as well as both large and small individual projects from each program; (2) the year when a country’s projects began, in order to visit mature projects; (3) the presence of both Section 1206 and Section 1207 projects in a country, in order to use our time efficiently in visiting projects from both programs in single country visits; (4) DOD and State suggestions; (5) recent GAO or DOD and State Inspectors General visits, to reduce the burden on embassies; (6) congressional interest; (7) security considerations; and (8) opportunities to consolidate the fieldwork of multiple GAO engagements. We selected the highest-ranking countries within the areas of responsibility of each of the six U.S. geographic combatant commands. For telephone interviews, we selected the next-highest-ranking country within the area of responsibility of each combatant command and four additional countries of strategic importance. The results of our work for the 18 countries we selected are not necessarily generalizable to all 62 countries receiving assistance under these programs. To assess the extent to which the Section 1206 and 1207 programs have been consistent with U.S. government strategic priorities, we conducted the following work. We interviewed DOD, State, and U.S. Agency for International Development (USAID) officials involved in implementing Section 1206 and 1207 programs and documented their views on how ongoing projects relate to U.S. strategies and priorities. At DOD we spoke to officials from the Office of the Secretary of Defense (OSD) and Joint Staff and the Defense Security Cooperation Agency, in Washington, D.C.; the U.S. Special Operations Command in Tampa, Florida; the six geographic combatant commands—the U.S. 
Africa Command and the U.S. European Command in Stuttgart, Germany; the U.S. Central Command in Tampa, Florida; the U.S. Northern Command in Colorado Springs, Colorado (by telephone); the U.S. Pacific Command in Honolulu, Hawaii; and the U.S. Southern Command in Miami, Florida; and the Africa Command Navy component, in Naples, Italy. At State we spoke to officials in the Bureau of Political-Military Affairs and the Office of the Coordinator for Reconstruction and Stabilization (State/S/CRS) in Washington, D.C. At USAID we spoke to officials from the Office of the Chief Operating Officer and the Bureau for Democracy, Conflict, and Humanitarian Assistance in Washington, D.C. We also interviewed U.S. embassy officials (by telephone or in person) in all 18 countries we selected. To identify U.S. strategic priorities, we also obtained and analyzed documents, such as mission strategic plans and lists of priority countries identified by the U.S. intelligence community. We analyzed Section 1206 program funding data and DOD’s priority country list to determine the percentage of funding that has been allotted for countries on this list. We calculated this amount overall and for each year to identify any trends over time. Since the list of priority countries is classified, we aggregated the information we reported from our analysis to avoid disclosing classified information. We used funding data based on allotments for each Section 1206 project, in line with DOD’s notifications to the Congress, which we determined were sufficiently reliable for our purposes. We analyzed all written project proposals for approved Section 1206 program-funded projects to determine how many of them described specific terrorist threats. DOD officials consistently identified these proposals as the most authoritative and detailed documents about each project’s purpose and objectives. In all, DOD and State have approved 92 proposals, accounting for 149 projects. 
No formal proposals had been submitted for the 11 projects approved in fiscal year 2006 and 3 projects approved in fiscal years 2007 and 2008. We analyzed the proposals for 135 projects from fiscal years 2007-2009: 62 projects approved in 2007, 48 projects approved in 2008, and 25 projects approved in 2009. We determined that a project proposal addressed a specific threat if it (1) provided information indicating that some terrorist act had occurred, had been attempted, or had been or was being planned for in the country/region of the project, or (2) referred to a terrorist organization or individual in the country/region of the project that posed a threat that was being targeted by the proposed project. If project proposals did not meet these criteria, we determined that they addressed a nonspecific threat. Projects that we determined fell into this second category included those that addressed the global threat of terrorism, the existence of ungoverned territory, illegal fishing, smuggling, narcotics trafficking, human trafficking, piracy, or other illegal activities not specifically tied to observable terrorist-related activity in the country/region in question. Two analysts independently reviewed all the proposals according to these criteria, and any disagreements in the determinations both made were resolved through discussion. We reviewed applicable Section 1207 program guidelines to identify the requirements related to meeting U.S. stabilization priorities. We then analyzed Section 1207 program funding data and a U.S. government watchlist identifying countries vulnerable to state failure to determine the percentage of program funding that has been allotted for countries on this list. Since the watchlist is classified, we did not present specific data from our analysis to avoid disclosing classified information. 
We also analyzed political risk data compiled by IHS Global Insight, a private forecasting firm, to determine the percentage of project funds that were allotted to countries categorized as having high, very high, or extremely high short-term, internal political risk. This political risk score is a weighted average summary of probabilities that different political events, both domestic and external, such as civil war and trade conflicts, will reduce gross domestic product growth rates. The subjective probabilities are assessed by economists and country analysts at Global Insight on the basis of a wide range of information, and are reviewed by a team to ensure consistency across countries. The measures are revised quarterly; the measure we used comes from the first quarter of the year after each project proposal for the corresponding country was approved, except in the case of fiscal year 2009 projects, for which we used data from the third quarter of 2009 because data from the first quarter of 2010 were not yet available at the time of our review. We combined the results for all years to indicate what percentage of total funding was allotted to countries within each political risk category. To assess the reliability of the risk rating data, we interviewed officials of IHS Global Insight and reviewed related documents describing the methods used to gather these data and the internal control mechanisms employed to ensure consistency and reliability. We also compared the risk scores of similar sources of data related to country political risk to assess overall consistency. We determined that these risk rating data were sufficiently reliable for the purpose of assessing the general level of political stability of countries receiving Section 1207 program assistance. 
In addition, we reviewed all 28 approved proposals relating to 25 projects in the Section 1207 program in fiscal years 2006 through 2009 and assessed the extent to which proposals were for projects to help countries recover from or prevent instability. We considered that a project proposal addressed the prevention of instability if (1) the project objectives described an attempt to prevent, deny, counter, or reduce threat(s) to stability, such as armed conflict, violence, extremism, or terrorism/terrorists, or (2) the project objectives described an attempt to strengthen or enhance stability, and (3) the project did not address recovery from a specific event or occurrence of instability. We considered that a project proposal addressed the recovery from instability if (1) the project objectives described a specific event or occurrence of instability (e.g., insurgency, war, or episodic or recurring violence) and supported postconflict reconstruction or rebuilding efforts, or (2) the project objectives described efforts to help foreign governments regain or reestablish control over territories or institutions that were previously ungoverned or under the control of criminals, terrorists, or insurgents. Two analysts independently reviewed all the proposals according to these criteria, and any disagreements in the determinations both made were resolved through discussion. We used funding data based on allotments for each Section 1207 project, in line with DOD’s notifications to the Congress, which we determined were sufficiently reliable for our purposes. To assess the extent to which the Section 1206 and 1207 programs are distinct from other U.S. programs, we conducted the following work. We reviewed applicable Section 1206 program guidelines to identify the requirements relating to project distinctness. 
We then reviewed all available written proposals for projects to which these requirements applied (e.g., we compared projects approved in fiscal year 2009 with fiscal year 2009 guidelines) and analyzed the information that the proposals provided to distinguish the proposed project from those funded by other security assistance programs. We categorized each proposal based on whether the proposal (1) explained the reason(s), other than the lack of available funds, that another program could not be used; (2) did not address whether the proposed project was distinct from projects funded by other programs, other than the lack of available funds; or (3) identified one or more similar or related projects funded by another program but did not explain how the proposed project was distinct. Two analysts independently reviewed all the proposals according to these criteria, and any disagreements in the determinations both made were resolved through discussion. We considered only those proposals meeting the first criterion to have documented that the proposed project was distinct. We also interviewed relevant staff—at OSD; State’s Bureau of Political-Military Affairs; all six geographic combatant commands; the Africa Command Navy component; and the U.S. embassies in Albania, the Bahamas, Georgia, Kazakhstan, the Philippines, Malaysia, Ethiopia, Pakistan, Honduras, Lebanon, Mali, Mexico, Nigeria, Sri Lanka, and Ukraine—in person or by telephone, and documented their views on the factors that distinguish Section 1206 projects from other train and equip projects that they help implement under other programs. To determine whether funding assistance under Section 1206, instead of other traditional security assistance programs, entailed additional costs or funding delays, we asked an official from the Defense Security Cooperation Agency overseeing the Section 1206 program about the fees and implementation timing under this program and Foreign Military Financing (FMF). 
We reviewed applicable Section 1207 program guidelines to identify the requirements relating to project distinctness. We then reviewed all proposals for projects to which these requirements applied (i.e., 28 approved proposals for fiscal years 2006 through 2009) and assessed the extent to which the proposals included information to distinguish the respective project from those funded under other foreign assistance programs. We considered a proposed project to be distinct from other projects if (1) no other related projects were identified in the proposal, or (2) the proposed project did not fund a continuation of a prior or existing program in that country, through expansion of its geographic scope or an increase in the number of identical or closely related activities. For example, we did not consider an initiative to increase funding for an existing school construction program to build additional schools in other regions of a country to be distinct. We did not consider projects to be undertaken using existing contracting mechanisms, grants, or cooperative agreements to be distinct unless the type of proposed activity funded was described as being substantially different from ongoing activities. Two analysts independently reviewed all the proposals according to these criteria, and any disagreements in the determinations both made were resolved through discussion. We reviewed quarterly reports from countries that received Section 1207 program funding for State’s Bureau of International Narcotics and Law Enforcement activities to determine if funding delays were an issue. We also reviewed Section 1207 program funding data to determine the administrative costs charged by State/S/CRS, State (at U.S. embassies), and USAID. In addition, we interviewed cognizant officials at the U.S. 
embassies in Georgia, Haiti, Malaysia, the Philippines, and Uganda, and documented their views of the factors that distinguish respective Section 1207 projects from other assistance activities that they help implement under other programs. We also interviewed cognizant officials at USAID, State/S/CRS, and five geographic commands and documented their views on this topic. To determine the extent to which the Section 1206 and 1207 projects have addressed the sustainment needs of executed projects, we conducted the following work. We reviewed State and USAID documents describing U.S. foreign policy goals relating to sustainment of international counterterrorism-related efforts. We also reviewed Section 1206 program guidelines to identify requirements relating to project sustainment. We then reviewed all available written proposals for projects to which these requirements applied (i.e., projects approved in fiscal year 2009) and analyzed the information that each proposal included relating to project sustainment. We identified all the sources of funding that each proposal indicated would be used to sustain the project and categorized them as Foreign Military Financing, U.S. programs other than FMF, or host country funds. We also identified those proposals that indicated that host nation funds alone would be used for sustainment. Two analysts independently reviewed all the proposals according to these criteria, and any disagreements in the determinations both made were resolved through discussion. We also interviewed cognizant officials at OSD, State’s Bureau of Political-Military Affairs, all six geographic combatant commands, the Africa Command Navy component, and the U.S. embassies in Albania, the Bahamas, Ethiopia, Georgia, Honduras, Kazakhstan, Lebanon, Malaysia, Mali, Mexico, Nigeria, Pakistan, the Philippines, Sri Lanka, and Ukraine, and documented their views regarding sustainment of ongoing Section 1206 projects. 
We also used the World Bank’s 2010 country income ratings to analyze the potential ability of recipient countries to independently sustain Section 1206 projects. We reviewed applicable Section 1207 program guidelines to identify the requirements relating to project sustainment. We then reviewed all available written proposals to which these requirements applied (i.e., all 28 approved proposals for fiscal years 2006 through 2009) and assessed whether each proposal included information relating to project sustainment. We identified all the sources of funding that each proposal indicated would be used to sustain the project and categorized them as U.S. government assistance, host nation funds, or non-U.S. donors or other sources. We also identified those proposals that indicated that host nation resources alone would be used for sustainment. Two analysts independently reviewed all the proposals according to these criteria, and any disagreements in the determinations both made were resolved through discussion. In addition, we interviewed relevant staff at U.S. embassies in Georgia, Haiti, Malaysia, and the Philippines, and documented their views regarding sustainment of ongoing Section 1207 projects. We also documented the views on this topic from cognizant officials at USAID, State/S/CRS, and five geographic combatant commands. For those projects where potential sustainment from U.S. or other donor sources was not addressed by project proposals, we used the World Bank’s 2010 country income ratings to analyze the potential ability of the recipient countries to independently sustain Section 1207 activities. We determined that these data were sufficiently reliable for the purpose of this analysis. To establish the extent to which the Section 1206 and 1207 programs incorporate plans for monitoring and evaluation to assess project impact and inform program implementation, we conducted the following work. 
We reviewed applicable Section 1206 and 1207 program guidelines, as well as authorizing legislation, the Government Performance and Results Act of 1993, and Standards for Internal Control in the Federal Government to identify the requirements relating to project monitoring and evaluation. To determine what monitoring and evaluation has been conducted and what was planned for the Section 1206 program, we interviewed cognizant DOD and State officials in Washington, D.C., and at the six U.S. geographic combatant commands and the Africa Command Navy component, as well as U.S. officials in Albania, the Bahamas, Ethiopia, Georgia, Honduras, Kazakhstan, Lebanon, Malaysia, Mali, Mexico, Nigeria, Pakistan, the Philippines, Sri Lanka, and Ukraine in person or via telephone. We also analyzed the 135 available written project proposals to determine the extent to which they identified measurable program objectives. We considered a proposal as having a measurable objective if (1) it identified an objective or an expected outcome and a means of quantitatively or qualitatively assessing achievement of that objective or outcome, or (2) it identified a specific expected outcome, such as the establishment of a particular military capability or deployment of troops in a particular stabilization operation, specific enough that an observer could reasonably be expected to determine by objective means whether the outcome had been achieved. We did not consider a proposal as having a measurable objective if (1) it did not identify any objective or expected outcome or (2) it described the objective or expected outcome in general terms, such as achieving long-term stability or establishing an effective deterrence against extremist incursions, without identifying potential indicators or other quantitative or qualitative means to assess the achievement of that objective or outcome. 
Two analysts independently reviewed all the proposals according to these criteria, and any disagreements in the determinations both made were resolved through discussion. To determine what monitoring and evaluation has been conducted and what was planned for the Section 1207 program, we interviewed cognizant DOD, State, and USAID officials, as well as agency officials at five U.S. geographic combatant commands. In addition, we interviewed relevant staff at U.S. embassies in Georgia, Haiti, Malaysia, and the Philippines, and documented their views regarding monitoring and evaluation of ongoing Section 1207 projects. We also analyzed all 28 approved proposals to determine the extent to which they identified measures of effectiveness. We considered a proposal to have measures of effectiveness if it identified either quantitative or qualitative measures or performance indicators that would be used to assess the results of the proposed project. We did not require the proposal to provide detailed information about every measure or indicator that would be used, but we considered a basic description of them or examples as adequate evidence to meet the criteria. We did not consider a reference to State’s standard performance measurement structure as adequate evidence to meet our criteria unless the proposal identified which standard measures would be used. Two analysts independently reviewed all the proposals according to these criteria, and any disagreements in the determinations both made were resolved through discussion. We conducted this performance audit from February 2009 to April 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 3 describes selected U.S. foreign assistance programs and accounts that DOD, State, and USAID have traditionally used to fund training and equipment for counterterrorism and stabilization operation support and assistance related to reconstruction, security, and stabilization. Table 4 lists the recipient countries and their allotments of Section 1206 and 1207 funds for fiscal years 2006 through 2009, ranked according to the total amount of funding provided. Figures 11 and 12 show the allotments of Section 1206 and 1207 funds, respectively, to U.S. geographic combatant commands for fiscal years 2006 through 2009. Table 5 lists the recipients of Section 1206 funds and the type of equipment DOD and State have provided to each country for fiscal years 2006 through 2009. Table 6 lists the recipients of Section 1207 funds and the type of reconstruction, stabilization, and security assistance provided by State and USAID. Key contributors to this report include Jeffrey Phillips, Assistant Director; James Michels; Kathryn Bolduc; Robert Heilman; Martin de Alteriis; Michael Silver; Mark Dowling; John Pendleton; Marie Mak; Alissa Czyz; Jodie Sandel; Erin Smith; Thomas Costa; Kathryn Bernet; John Neumann; Michael Rohrback; Sally Williamson; Jeff Isaacs; Ophelia Robinson; Jenna Beveridge; Joseph Carney; Lynn Cothern; Anthony Pordes; and Jeremy Sebest.
In 2006, the United States created two new programs, authorized in Sections 1206 and 1207 of the Fiscal Year 2006 National Defense Authorization Act, to respond to the threats of global terrorism and instability. These programs have provided over $1.3 billion in military and nonmilitary aid to 62 countries and are due to expire in 2011 and 2010, respectively. The Congress mandated that GAO assess the programs. This report addresses the extent to which the programs (1) are consistent with U.S. strategic priorities, (2) are distinct from other programs, (3) address sustainment needs, and (4) incorporate monitoring and evaluation. GAO analyzed data and program documents from the Departments of Defense (DOD) and State (State), and the U.S. Agency for International Development (USAID), and interviewed U.S. and host country officials. The Section 1206 and 1207 programs have generally been consistent with U.S. strategic priorities. The Section 1206 program was established to build the military capacity of foreign countries to conduct counterterrorism and stabilization operations. DOD and State have devoted 82 percent of this program's funds to address specific terrorist threats, primarily in countries the U.S. intelligence community has identified as priorities for the counterterrorism effort. The Section 1207 program was established to transfer DOD funds to State for nonmilitary assistance related to stabilization, reconstruction, and security. DOD, State, and USAID have devoted 77 percent of this program's funds to countries at significant risk of instability, mostly those the United States has identified as vulnerable to state failure. Based on agency guidelines, the Section 1206 program is generally distinct from other programs, while the Section 1207 program is not. 
In most cases, Section 1206 projects addressed urgent and emergent counterterrorism and stabilization priorities of combatant commanders and did so more quickly than other programs, sometimes in a year, whereas Foreign Military Financing (FMF) projects can take up to 3 years to plan. DOD and embassy officials GAO spoke to consistently explained why projects do not overlap those of FMF and other programs, although project proposals GAO reviewed did not always document these distinctions. Section 1207 projects are virtually indistinguishable from those of other foreign aid programs in their content and time frames. Furthermore, the Section 1207 program has entailed additional implementation costs and funding delays beyond those of traditional foreign assistance programs, while the 1206 program has not. The uncertain availability of resources to sustain Section 1206 projects poses risks to achieving long-term impact. Enabling nations to achieve sustainable counterterrorism capabilities is a key U.S. policy goal. The long-term viability of Section 1206 projects is threatened by (1) the limited ability or willingness of partner nations to support new capabilities, as 76 percent of Section 1206 projects are in low- or lower-middle-income countries, and (2) U.S. legal and policy restrictions on using FMF and additional Section 1206 resources for sustainment. In contrast, sustainment risks for Section 1207 projects appear minimal, because State, USAID, and DOD are not restricted from drawing on a variety of overlapping funding sources to continue them. DOD and State have incorporated little monitoring and evaluation into the Section 1206 and 1207 programs. For Section 1206 projects, the agencies have not consistently defined performance measures, and results reporting has generally been limited to anecdotal information. For Section 1207 projects, the agencies have defined performance measures and State requires quarterly reporting on project implementation. 
However, State has not fully analyzed this information or provided it to DOD to inform program management. As a result, agencies have made decisions to sustain and expand both Section 1206 and 1207 projects without documentation of progress or effectiveness.
The overall effectiveness of the Senate’s computer controls is dependent on the controls implemented by (1) SCC, which operates the Senate mainframe computer, (2) system users, which include all Member offices and Senate Committees, and (3) the Office of Telecommunications, which maintains telecommunication equipment and networks that link system users to the SCC mainframe and to other users. In addition to processing financial systems, such as payroll and other disbursements, the SCC mainframe processes other important and confidential information, such as Senate personnel files, LEGIS—a text retrieval system for bills and other legislative information, and Capitol Police and other administrative files. System users operate about 260 local area networks (LANs) in the Washington, D.C., area and across the country that communicate with the mainframe and perform data processing functions for users. Overall, there are approximately 580 user accounts that allow access to one or more programs run by SCC. Our objective was to evaluate and test the general computer controls over the financial systems maintained and operated by the Senate Computer Center (SCC) that processed financial information for the Office of the Secretary of the Senate and the Office of the Sergeant at Arms and Doorkeeper. General computer controls, however, also affect the security and reliability of financial and nonfinancial operations processed by SCC for other Senate offices. Specifically, we evaluated controls intended to protect data, files, and programs from unauthorized access; prevent unauthorized changes to systems and applications software; provide segregation of duties among applications and systems programmers, computer operators, security administrators, and other data center personnel; ensure recovery of computer processing operations in case of a disaster or other unexpected interruption; and ensure adequate computer security administration. 
To evaluate these controls, we identified and reviewed SCC’s information system general control policies and procedures. Through this review and discussions with SCC staff, including programming, operations, and security personnel, we determined how the general controls should work and the extent to which data center personnel considered them to be in place. We also reviewed the installation and implementation of SCC’s systems and security software. Further, we tested and observed the operation of general controls over SCC information systems to determine whether they were in place, adequately designed, and operating effectively. Our tests included attempts to obtain access to sensitive data and programs, which were performed with the knowledge and cooperation of SCC officials. To assist in our evaluation and testing of general controls, we contracted with the public accounting firm of Deloitte & Touche LLP. We determined the scope of our contractor’s audit work, monitored its progress, and reviewed the related work papers to ensure that the resulting findings were adequately supported. We performed our work at the Senate Computer Center in Washington, D.C., from April 1995 through July 1995 in accordance with generally accepted government auditing standards. During the course of our work, we met with SCC officials to discuss our work and they informed us of the steps they planned to take or had taken to address our findings. At the conclusion of our work, we provided a draft of this report to Senate officials who said that they concurred with our findings, conclusions, and recommendations. Two basic internal control objectives for any information system are to protect data from unauthorized changes and to prevent unauthorized disclosures of sensitive data. Without effective access controls, the reliability of a computer system’s data cannot be maintained, sensitive data can be accessed and changed, and information can be inappropriately disclosed. 
SCC had computer security weaknesses that could result in unauthorized access to the system’s data, files, and programs. These weaknesses included ineffective (1) implementation of SCC’s access control software and (2) practices to authorize, monitor, and review user access. During the course of our work, SCC officials advised us of actions they had taken or planned to take to address some of the weaknesses we identified. SCC has implemented CA-ACF2, a commercially available access control software package, to control its primary financial management system and certain batch processing. However, ACF2 was not implemented to control access to other mainframe programs, including parts of the payroll system, LEGIS, the Capitol Police system, and other administrative systems. While many programs have built-in security features, such features typically are not as comprehensive or stringent as those provided by ACF2. Common deficiencies in such programs include a lack of audit trails for user activity and few, if any, password management controls (for example, forced password changes and minimum password lengths). By not implementing ACF2 over all its systems and programs, SCC has forfeited many of the control benefits provided by the software and must maintain expertise in the security administration of each of these systems and programs. For example, at least one system not under ACF2 requires the use of a single shared password for access. Since all users share the same password, the system cannot provide an audit trail of a particular user’s activity, thereby limiting user accountability. SCC officials advised us that they did not plan to implement ACF2 over other mainframe applications due to (1) indications that, for some programs, less rigorous security measures are preferred by user management to provide easier accessibility, (2) resource constraints, and (3) intentions to transition from the mainframe to a decentralized network environment.
However, as the transition may not be completed for up to 5 years, we believe that it is important for SCC management to assess its ongoing risks of not implementing ACF2 completely and take appropriate actions. In addition, the implementation of ACF2 over the financial management system and batch processing is not fully effective. The technical options that SCC has implemented to control access to the information on its mainframe negate many of the control benefits that the software offers. For example, ACF2 was implemented to allow up to 20 security violations, such as attempts to access data for which the user is not authorized, to occur in a single job or session before it is canceled. Similarly, a user was permitted up to 500 invalid password attempts daily before ACF2 denied access. By allowing such a high number of violations and invalid password attempts to occur, SCC increased the risk of unauthorized access and improper use or disclosure of sensitive Senate data. SCC officials advised us that they have begun changes to these ACF2 control settings, such as reducing the limits on the number of security violations and invalid password attempts. Other password controls were weakened due to ineffective ACF2 implementation. While a user’s identification (ID) typically follows a standard format that makes it easily deduced, passwords are used to authenticate the user and thus should be difficult to deduce, kept secret, and frequently changed. Most SCC users were only required to change their passwords every 180 days; some users were not required to change their passwords at all. In addition, SCC had not implemented a shorter password expiration period for users having special system or security privileges. Moreover, SCC’s current security policies did not prevent users from reusing the same password indefinitely. 
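The looseness of these settings can be illustrated with a simple configuration audit. The sketch below is hypothetical: the setting names, data layout, and recommended limits are assumptions for this example, not actual ACF2 parameters.

```python
# Access-control settings of the kind described above; the names and the
# recommended limits are illustrative assumptions, not actual ACF2 parameters.
scc_settings = {
    "max_violations_per_session": 20,     # violations allowed before the session is canceled
    "max_invalid_passwords_per_day": 500, # invalid password attempts before access is denied
    "password_max_age_days": 180,         # 0 would mean passwords never expire
}

# Far stricter limits of the kind commonly recommended (assumed for illustration).
recommended_limits = {
    "max_violations_per_session": 3,
    "max_invalid_passwords_per_day": 5,
    "password_max_age_days": 90,
}

def audit_settings(actual, limits):
    """Return a finding for every setting looser than its recommended limit."""
    findings = []
    for name, limit in limits.items():
        value = actual[name]
        if name == "password_max_age_days" and value == 0:
            value = float("inf")  # never-expiring passwords are the weakest case
        if value > limit:
            findings.append(f"{name} is {actual[name]}; recommended at most {limit}")
    return findings

for finding in audit_settings(scc_settings, recommended_limits):
    print(finding)
```

Run periodically, a check like this would have flagged all three of the settings discussed above as looser than recommended practice.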
The longer a user is allowed to use the same password, the greater the risk that an unauthorized user may discover and use another user’s ID/password combination. SCC management advised us that it was reviewing password policies and had reduced the password change requirement to 90 days for some users and to 30 days for some users with special privileges, and that it was investigating ways to restrict others. Organizations can reduce the risk that unauthorized changes or disclosures occur by (1) granting employees authority to read or modify only those programs and data that are necessary to perform their duties and (2) periodically reviewing this authority and modifying it to reflect changes in job responsibilities and terminations in employment. Having unused or unneeded user accounts increases the risk that an unauthorized user will discover and use such an account without prompt detection. In a sample of 38 SCC accounts, we found 3 assigned to individuals who had separated from Senate employment from 5 to 15 months earlier. We also found that 159, or over one quarter of the accounts, had not been used in more than 6 months. We noted that another 79 had never been used, of which 64 had existed for more than 120 days. Because initial passwords may be easily guessed, these inactive accounts present an increased risk that passwords will be compromised and unauthorized access allowed. We also identified 30 user IDs and passwords that were shared by staff in certain departments, even though these staff members have individual accounts. The use of shared IDs and passwords undermines the effectiveness of monitoring, because individual accountability is lost, and increases the risk of password compromise. In addition, SCC’s implementation of ACF2 allowed for unnecessary access to sensitive data and programs.
Both operations and programming personnel in SCC’s Central Services Division had a level of access that was not necessary for performance of their regular job duties and could increase the risk of unauthorized disclosure or modification of sensitive data. For example, 11 applications programmers had the ability to change on-line payroll data, 11 could alter vendor information, and 2 could change financial data. Moreover, this level of access could not be monitored because no record, or log, of the access was created. We identified another area in which SCC could improve its access monitoring controls. Specifically, SCC did not implement session timeouts, which automatically log off a user’s terminal after a specified period of inactivity, over all of its programs. Lack of session timeouts increases the risk of unauthorized access to unattended terminals. SCC management was reviewing its access authorization and monitoring procedures at the time of our review and had taken or planned to take several corrective actions. Specifically, SCC management indicated that the security administrator had begun to log and monitor user access to determine what programs and files are being used. This information will be used as a basis for removing access privileges where they are not used or needed. However, where unrestricted access is deemed necessary, management plans to log and monitor it. Also, SCC management advised us that inactive user IDs were being removed from the system. Finally, SCC management was reviewing shared user IDs and passwords and planned to reduce or eliminate them. In addition to access controls, a computer system typically has other important general controls to ensure the integrity and reliability of data. 
These general controls include policies, procedures, and control techniques to (1) prevent unauthorized changes to system software, (2) provide appropriate segregation of duties among computer personnel, and (3) ensure continuation of computer processing operations in case of an unexpected interruption. SCC had weaknesses in the general controls over each of these areas, although its management had made or planned to make improvements in several areas. The integrity of an information system depends upon management’s clear understanding and documentation of the system. Formal processes for developing and maintaining software are important tools to assist management in ensuring that all changes to software meet user specifications, are appropriately reviewed and authorized, and include adequate security measures. SCC did not have a formal change control process to document management’s authorization and approval of routine and emergency changes to systems software. Change control procedures for the major financial programs have not been formalized to ensure that only authorized changes are made to programs and data. Inadequate management of system software changes and maintenance, including the lack of documentation, also increased the risk that back-up and recovery procedures could not be effectively performed. Also, we found instances of (1) the unintended creation of access paths to computer resources and (2) situations in which SCC staff were unsure of the purpose of undocumented systems software functions. Both of these weaknesses increased the risk of security or reliability breaches. For example, the operating system contained the names of five programs that no longer existed, introducing the risk that an unauthorized program could be run under one of those program names to gain unauthorized access to programs and data files. SCC officials were reviewing software change and maintenance procedures and planned to formalize them. 
Also, SCC management advised us that unused program names have been eliminated. One fundamental technique for safeguarding programs and data is the appropriate segregation of duties and responsibilities of computer personnel to reduce the risk that errors or fraud will go undetected. At SCC, we found inadequate segregation of duties, particularly in the granting of powerful security privileges. SCC has explicitly assigned two systems programmers to assist in the security administration of the access control software. Under normal circumstances, back-up security staff should report to the security administrator and have no programming, operations, or librarian duties. Because these individuals have both systems and security administrator privileges associated with their user accounts, they can eliminate any audit trail of their activity in the system. SCC officials indicated that they were reviewing this issue and considering several steps to mitigate the risks of assigning dual responsibilities. In addition, SCC has assigned powerful ACF2 security functions to many user accounts for which these privileges represent a significant security exposure. For example, 49 accounts could bypass all ACF2 controls (including the creation of audit trails, known as logging), allowing the user full and virtually undetectable access to all files, programs, and other system resources. This level of authority should generally be limited to emergency IDs, which are activated with management approval on a temporary, as-needed basis to handle problems or emergencies. SCC officials advised us that they are assessing the granting of full access to such a large number of individuals and have begun to reduce or eliminate such access. In the interim, to mitigate the risks associated with unlogged access to sensitive files, they have changed their procedures to ensure that all updates to order entry and payroll files are logged. 
Further, controls over security administration could be improved if the data security administrator reported directly to the SCC Director to provide adequate authority and independence in security matters. Currently, the data security administrator reports to the assistant director of the Educational Services and Support Division. The data security administrator, therefore, had customer service responsibilities, which may not be compatible with the duties associated with systems security. SCC management was considering the role and organizational placement of the data security administrator at the conclusion of our work. An agency must ensure that it is adequately prepared to cope with a loss of operational capability due to an earthquake, fire, accident, sabotage, or any other operational disruption. A detailed, current, and tested disaster recovery plan is essential to ensure that the SCC information system can promptly restore operations and data, such as payroll processing and records, in the event of a disaster. Prior to its move to its current location in 1992, SCC had an arrangement with the Library of Congress to provide back-up operations on its mainframe in the event of an emergency. However, the two mainframes are no longer compatible, so the Library of Congress back-up site cannot be used. SCC has developed a back-up capability on its mainframe in the event a portion of the machine goes down. However, in the event that the entire SCC was incapacitated, back-up processing would not be readily available. SCC has advised us that the Sergeant at Arms has since contracted with a commercial vendor to provide off-site back-up processing facilities in the event of an emergency. Further, SCC management has advised us that it has begun to develop a disaster recovery plan. 
Once developed, it will be important that SCC implement and periodically test the plan at the back-up facility, identifying the objectives and expected results for use in evaluating test results. The Senate’s lack of a comprehensive strategic plan for computer security administration contributed to the general control weaknesses in SCC operations. Such a plan would consider all Senate computer resources, and include SCC, the Office of Telecommunications, and users in a comprehensive policy for security awareness and administration. Development and implementation of a comprehensive strategic plan will become more important as SCC and its customers continue moving from an environment in which all major applications are processed on a mainframe to a decentralized network environment distributed throughout the Senate. In a distributed processing environment, an integrated security plan is crucial for coordinating control over multiple locations, numerous hardware and software installations, and numerous paths of communication. For example, given the large number of possible access sites throughout the Senate, external access is a significant area of exposure and should be considered in any overall security plan. Without a comprehensive strategy, duplication of some controls and omission of others are likely to occur, adversely affecting both efficiency and effectiveness. As part of our audits of receipts and disbursements, we evaluated assertions made by the Secretary of the Senate and the Sergeant at Arms and Doorkeeper that internal controls in place on September 30, 1994, were effective in safeguarding assets from material loss, assuring material compliance with relevant laws and regulations, and assuring that there were no material misstatements in the financial statements. 
We considered the effect of general computer control weaknesses and determined that other management controls mitigated their effect on the statements of disbursements, receipts, and financing sources for the two audited entities. Both of these offices use SCC resources to process financial information that is essential to their operations. The Senate Disbursing Office, a part of the Office of the Secretary, uses SCC to process payroll and personnel information and to maintain vendor information. The Senate Disbursing Office maintains its own accounting system, which is used to process other disbursements and report all Senate financial transactions. The Sergeant at Arms and Doorkeeper uses SCC to process its accounting and equipment inventory systems. The Senate Disbursing Office performs various control procedures to ensure that data are properly authorized and entered into the system, including comparison of system reports with supporting documents at various stages of processing. Also, the Senate Disbursing Office distributes monthly reports to the Secretary of the Senate and the Sergeant at Arms and Doorkeeper that list payroll and other disbursements made on their behalf. The offices then review the monthly reports for accuracy. Both the Secretary of the Senate and the Sergeant at Arms and Doorkeeper reconcile the nonpayroll information to their own independent records to ensure that disbursements are consistent with the approved requests for payment that they submitted. Any differences discovered by reviews or reconciliations are discussed with the Senate Disbursing Office and resolved. Finally, the Secretary of the Senate publishes a semiannual public report that summarizes payroll information by employee and details the individual disbursements of the entire Senate. The Senate’s general computer control weaknesses could result in serious breaches in the security of its sensitive data and programs, such as those related to payroll and personnel. 
A comprehensive strategic plan that integrates and controls access and processing for all Senate files, programs, and data is crucial to ensuring that Senate computer resources are adequately safeguarded. As the Senate moves to a distributed processing environment, development and implementation of a comprehensive computer security plan will become even more important. To correct the existing weaknesses at the Senate Computer Center, we recommend that you direct the Sergeant at Arms and Doorkeeper to take the following actions:
- Develop and implement policies and procedures to limit access for the system’s users to only those computer programs and data needed to perform their duties. Access controls should be improved by (1) effectively utilizing SCC’s access control software, including assessing ongoing risks of incomplete implementation and taking appropriate control measures, (2) strengthening procedures to authorize, monitor, and review user access, and (3) implementing session timeout procedures.
- Develop and implement policies and procedures for controlling software changes, including requiring documentation for the purpose of the change, management review and approval, and independent testing.
- Provide for appropriate segregation of computer duties, including upgrading the position of data security administrator to allow for appropriate independence and authority.
- Develop, implement, and test a disaster recovery plan for all critical SCC operations.
In addition, to improve Senatewide computer security, we recommend that you direct that the Senate develop and implement a comprehensive strategic plan that integrates and controls access and processing for all Senate files, programs, and data. We are sending copies of this report to the Sergeant at Arms and Doorkeeper of the U.S. Senate and to the Secretary of the Senate. Copies will be made available to others upon request. Please contact me at (202) 512-9489 if you or your staffs have any questions.
Major contributors to this report are listed in appendix I: Shannon Cross, Robert Dacey, Francine Delvecchio, Sharon Kittrell, and Crawford Thompson.
Pursuant to a congressional request, GAO evaluated and tested the general computer controls that affect the overall security and effectiveness of the Senate Computer Center's (SCC) financial systems, focusing on whether those controls: (1) protect data, files, and programs from unauthorized access; (2) prevent unauthorized changes to systems and applications software; (3) provide segregation of duties among computer, security, and other SCC personnel; (4) ensure recovery of computer processing operations in case of unexpected interruption; and (5) ensure adequate computer security administration. GAO found that: (1) SCC general computer controls do not adequately protect sensitive data files and computer programs from unauthorized disclosure and modification; (2) SCC has not fully implemented its access control software to control access to other mainframe programs due to a preference for easier access, resource constraints, the planned transition to decentralized networks, conflicting technical options, and poor access monitoring capabilities; (3) SCC lacks formal software change control and documentation procedures; (4) SCC has not adequately segregated computer duties, particularly regarding security privileges; (5) although SCC is developing off-site disaster recovery and contingency capabilities, the Senate could be exposed to significant security risk as it moves toward a decentralized network environment because it does not have a comprehensive strategic plan for its computer resources; and (6) the two Senate offices responsible for Senate receipts and disbursements supplement SCC general computer management controls to ensure data integrity and authorization when reconciling disbursement information with independent records.
Influenza is more severe than some viral respiratory infections, such as the common cold. During an annual influenza season, most people who contract influenza recover completely in 1 to 2 weeks, but some develop serious and potentially life-threatening medical complications, such as pneumonia. People aged 65 years and older, people of any age with chronic medical conditions, children younger than 2 years, and pregnant women are generally more likely than others to develop severe complications from influenza. In an average year in the United States, more than 36,000 individuals die and more than 200,000 are hospitalized from influenza and related complications. Pandemic influenza differs from annual influenza in several ways. According to the World Health Organization, pandemic influenza spreads to all parts of the world very quickly, usually in less than a year, and can sicken more than a quarter of the global population, including young, healthy individuals. Although health experts cannot predict with certainty which strain of influenza virus will be involved in the next pandemic, they warn that the avian influenza virus identified in the human cases in Asia, known as H5N1, could lead to a pandemic if it acquires the genetic ability, so far absent, to spread quickly from person to person. Vaccination is the primary method for preventing influenza and its complications. Produced in a complex process that involves growing viruses in millions of fertilized chicken eggs, influenza vaccine is administered each year to protect against particular influenza strains expected to be prevalent that year. Experience has shown that vaccine production generally takes 6 or more months after a virus strain has been identified; vaccines for certain influenza strains have been difficult to mass-produce. After vaccination for the annual influenza season, it takes about 2 weeks for the body to produce the antibodies that protect against infection. 
According to CDC recommendations, the optimal time for annual vaccination is October through November. Because the annual influenza season typically does not peak until January or February, however, in most years vaccination in December or later can still be beneficial. At present, two vaccine types are recommended for protection against influenza in the United States: an inactivated virus vaccine injected into muscle and a live virus vaccine administered as a nasal spray. The injectable vaccine—which represents the large majority of influenza vaccine administered in this country—can be used to immunize both healthy individuals and individuals at highest risk for severe complications, including those with chronic illness and those aged 65 years and older. The nasal spray vaccine, in contrast, is currently approved for use only among healthy individuals aged 5 to 49 years who are not pregnant. For the 2003–04 influenza season, two manufacturers—one with production facilities in the United States (sanofi pasteur) and one with production facilities in the United Kingdom (Chiron)—produced about 83 million doses of injectable vaccine, which represented about 96 percent of the U.S. vaccine supply. A third U.S. manufacturer (MedImmune) produced the nasal spray vaccine. For the 2004–05 influenza season, CDC and its Advisory Committee on Immunization Practices (ACIP) initially recommended vaccination for about 188 million people in designated priority groups, including roughly 85 million people at high risk for severe complications. On October 5, 2004, however, Chiron announced that it could not provide its expected production of 46–48 million doses—about half the expected U.S. influenza vaccine supply. Although vaccination is the primary strategy for protecting individuals who are at greatest risk of severe complications and death from influenza, antiviral drugs can also help to treat infection. 
If taken within 2 days of a person’s becoming ill, these drugs can ease symptoms and reduce contagion. In the event of a pandemic, such drugs could lower the number of deaths until a pandemic influenza vaccine became available. Four antiviral drugs have been approved by the Food and Drug Administration (FDA) for treatment of influenza: amantadine, rimantadine, oseltamivir, and zanamivir. HHS has primary responsibility for coordinating the nation’s response to public health emergencies. Within HHS, CDC is one of the agencies that protect the nation’s health and safety. CDC’s activities include efforts to prevent and control diseases and to respond to public health emergencies. CDC and ACIP recommend which population groups should be targeted for vaccination each year and, when vaccine supply allows, recommend that any person who wishes to decrease his or her risk of influenza be vaccinated. In addition, the National Vaccine Program Office is responsible for coordinating and ensuring collaboration among the many federal agencies involved in vaccine and immunization activities; the office also issued a draft national pandemic influenza preparedness plan in August 2004. Preparing for and responding to an influenza pandemic differ in several respects from preparing for and responding to an annual influenza season. For example, past influenza pandemics have affected healthy young adults who are not typically at high risk for severe influenza-related complications, so the groups given priority for early vaccination may differ from those given priority in an annual influenza season. In addition, according to CDC, a vaccine probably would not be available in the early stages of a pandemic. Shortages of vaccine would therefore be likely during a pandemic, potentially creating a situation more challenging than a shortage of vaccine for an annual influenza season. 
One lesson learned from the 2004–05 season that is relevant to a future vaccine shortage in either an annual influenza season or a pandemic is the importance of planning before a shortage occurs. At the time the influenza vaccine shortage became apparent, the nation lacked a contingency plan specifically designed to respond to a severe vaccine shortage. The absence of such a plan led to delays and uncertainty among many state and local entities about how best to ensure that individuals at high risk of severe complications and others in priority groups had access to vaccine during the shortage. Faced with the unanticipated shortfall, CDC redefined the priority groups it had recommended for vaccination and asked sanofi pasteur, the remaining manufacturer of injectable vaccine, to suspend distribution until the agency completed its assessment of the shortage’s extent and developed a plan to distribute the manufacturer’s remaining vaccine to providers serving individuals in the priority groups. Developing and implementing this distribution plan took time and led to delays in response and some confusion at state and local levels. Our work showed that several areas of planning are particularly important for enhancing preparedness before a similar situation occurs in the future, including defining the responsibilities of federal, state, and local officials; using emergency preparedness plans and emergency health directives; and facilitating the distribution and administration of vaccine. Clearly defining responsibilities of federal, state, and local officials can minimize confusion. During the 2004–05 vaccine shortage, even though CDC worked with states and localities to coordinate roles and responsibilities, problems occurred. For example, CDC worked with national professional associations to survey long-term-care providers throughout the country to determine if seniors had adequate access to vaccine. 
Maine and other states, however, also surveyed their long-term-care providers to make the same determination. This duplication of effort expended additional resources, burdened some long-term-care providers in the states, and created confusion. Emergency preparedness plans help coordinate local response. State and local health officials in several locations we visited reported that using existing emergency plans or incident command centers (the organizational systems set up specifically to handle the response to emergency situations) helped coordinate effective local responses to the vaccine shortage. For example, public health officials from Seattle–King County said that using the county’s incident command system played a vital role in coordinating an effective and timely local response and in communicating a clear message to the public and providers. In addition, according to public health officials, emergency public health directives helped ensure access to vaccine by supporting providers in enforcing the CDC recommendations and in helping to prevent price gouging in certain states. Partnerships between the public and private sectors can facilitate distribution and administration of vaccine. In San Diego County, California, for example, local health officials worked with a coalition of partners in public health, private businesses, and nonprofit groups throughout the county. Other mechanisms facilitated administering the limited supply of influenza vaccine to those in high-risk or other priority groups. In Stearns County, Minnesota, for example, public health officials worked with private providers to implement a system of vaccination by appointment. Rather than standing in long lines for vaccination, individuals with appointments went to a clinic during a given time slot. 
Although an influenza pandemic may differ in some ways from an annual influenza season, experience during the 2004–05 shortage illustrated the importance of having contingency plans in place ahead of time to prevent delays when timing is critical. Some health officials indicated that, as a result of the experience with the influenza vaccine shortage, they were revising state and local preparedness plans or modifying command center protocols to prepare for future emergencies. For example, experiences during the 2004–05 influenza season led Maine state officials to recognize the need to speed completion of their pandemic influenza preparedness plan. Over the past 5 years, we have reported on the importance of planning to address critical issues such as how vaccine will be purchased and distributed; how population groups will be given priority for vaccination; and how federal resources should be deployed before the nation faces a pandemic. We have also urged HHS to complete its pandemic preparedness and response plan, which the department released in draft form in August 2004. This draft plan described options for vaccine purchase and distribution and provided planning guidance to state and local health departments. As we testified earlier, however, the draft plan lacked clear guidance on potential priority groups for vaccination in a pandemic, and key questions remained about the federal role in purchasing and distributing vaccine. The experience in 2004–05 also highlighted the importance of finalizing such planning details. On November 2, 2005, HHS released its pandemic influenza plan. We did not, however, have an opportunity to review the plan before issuing this statement to determine whether the plan addresses these critical issues. 
A second lesson from the experience of the 2004–05 vaccine shortage that is relevant to future vaccine shortages in either an annual influenza season or a pandemic is the importance of streamlined mechanisms to make vaccine available in an expedited manner. For example, HHS began efforts to purchase foreign vaccine that was licensed for use in other countries but not the United States shortly after learning in October 2004 that Chiron would not supply any vaccine. The purchase, however, took several months to complete, and so vaccine was not available to meet the fall 2004 demand; by the end of the season, this vaccine had not been used. In addition, recipients of this foreign vaccine could have been required to sign a consent form and follow up with a health care worker after vaccination—steps that, according to health officials we interviewed in several states, would be too cumbersome to administer. Some states’ experience during the 2004–05 vaccine shortage also highlighted the importance of mechanisms to transfer available vaccine quickly and easily from one state to another; the lack of mechanisms to do so delayed redistribution to some states. During the 2004–05 shortage, some state health officials reported problems with their ability to purchase vaccine, both in paying for vaccine and in administering the transfer process. Minnesota, for example, tried to sell its available vaccine to other states seeking additional vaccine for their priority populations. According to federal and state health officials, however, certain states lacked the funding or flexibility under state law to purchase the vaccine when Minnesota offered it. As we have previously testified, establishing the funding sources, authority, or processes for quick public-sector purchases may be needed as part of pandemic preparedness. 
Recognizing the need for mechanisms to make vaccine available in a timely manner in the event of a pandemic, HHS has taken some action to address the fragility of the current influenza vaccine market. In its budget request for fiscal year 2006, CDC requested $30 million to enter into guaranteed-purchase contracts with vaccine manufacturers to help ensure vaccine supply. According to the agency, maintaining an abundant supply of annual influenza vaccine is critically important for improving the nation’s preparedness for an influenza pandemic. HHS is also taking steps toward developing a supply of vaccine to protect against avian influenza strains that could be involved in a pandemic. Experience during the 2004–05 shortage also illustrated the critical role communication plays when demand for vaccine exceeds supply and information about future vaccine availability is uncertain, as could happen in a future annual influenza season or a pandemic. During the 2004–05 shortage, CDC communicated regularly through a variety of media as the situation evolved. State and local officials, however, identified several communication lessons for future seasons or if an influenza pandemic occurred: Consistency among federal, state, and local communications is critical for averting confusion. State health officials reported several cases where inconsistent messages created confusion. Health officials in California, for example, reported that local radio stations in the state were running two public service announcements simultaneously—one from CDC advising those aged 65 years and older to be vaccinated, and one from the state advising those aged 50 years and older to be vaccinated. Disseminating clear, updated information is especially important when responding to changing circumstances. 
Beginning in October 2004, CDC asked individuals who were not in a high-risk group or another priority group to forgo or defer vaccination; this message, however, did not include instructions to check back with their providers later in the season, when more vaccine had become available. According to CDC, an estimated 17.5 million individuals specifically deferred vaccination to save vaccine for those in priority groups; local health officials said that many did not return when vaccine became available. Using diverse media helps reach diverse audiences. During the 2004–05 influenza season, public health officials emphasized the value of a variety of communication methods—such as telephone hotlines, Web sites, and bilingual radio advertisements—to reach as many individuals as possible and to increase the effectiveness of local efforts to raise vaccination rates. In Seattle–King County, Washington, for example, health department officials reported that a telephone hotline was important because some seniors did not have Internet access. Public health officials in Miami-Dade County, Florida, said that bilingual radio advertisements promoting influenza vaccine for those in priority groups helped increase the effectiveness of local efforts to raise vaccination rates. Education can alert providers and the public to prevention alternatives. In the 2004–05 shortage, some of the nasal spray vaccine for healthy individuals went unused, in part because of fears that the vaccine was too new and untested or that the live virus in the nasal spray could be transmitted to others. Further, public health officials we interviewed said that education about all available forms of prevention, including the use of antiviral medications and good hygiene practices, can help reduce the spread of influenza. 
Experience during the 2004–05 influenza vaccine shortage highlights the need to prepare the nation for handling future shortages in either an annual influenza season or an influenza pandemic. In particular, that season’s shortage emphasized the vital need for early planning, mechanisms to make vaccine available, and effective communication to ensure available vaccine is targeted to those who need it most. As our work over the past 5 years has noted, it is important for federal, state, and local governments to develop and communicate plans regarding critical issues—such as how vaccine will be purchased and distributed, which population groups are likely to have priority for vaccination, and what communication strategies are most effective—before we face another shortage of annual influenza vaccine or, worse, an influenza pandemic. For further information about this statement, please contact Marcia Crosse at (202) 512-7119 or crossem@gao.gov. Kim Yamane, Assistant Director; George Bogart; Ellen W. Chu; Nicholas Larson; Jennifer Major; and Terry Saiki made key contributions to this statement. Influenza Vaccine: Shortages in 2004–05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005. Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005. Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005. Flu Vaccine: Recent Supply Shortages Underscore Ongoing Challenges. GAO-05-177T. Washington, D.C.: November 18, 2004. Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreaks. GAO-04-1100T. Washington, D.C.: September 28, 2004. Public Health Preparedness: Response Capacity Improving, but Much Remains to Be Accomplished. GAO-04-458T. Washington, D.C.: February 12, 2004. Flu Vaccine: Steps Are Needed to Better Prepare for Possible Future Shortages. GAO-01-786T. Washington, D.C.: May 30, 2001. 
Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001. Influenza Pandemic: Plan Needed for Federal and State Response. GAO-01-4. Washington, D.C.: October 27, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Concern has been rising about the nation's preparedness to respond to vaccine shortages that could occur in future annual influenza seasons or during an influenza pandemic--a global influenza outbreak. Although the timing or extent of a future influenza pandemic cannot be predicted, studies suggest that its effect in the United States could be severe, and shortages of vaccine could occur. For the 2004-05 annual influenza season, the nation lost about half its expected influenza vaccine supply when one of two major manufacturers announced in October 2004 that it would not release any vaccine. GAO examined federal, state, and local actions taken in response to the shortage, including lessons learned. The nation's experience during the unexpected 2004-05 vaccine shortfall offers insights into some of the challenges that government entities will face in a pandemic. GAO was asked to provide a statement on lessons learned from the 2004-05 vaccine shortage and their relevance to planning and preparing for similar situations in the future, including an influenza pandemic. This statement is based on a GAO report, Influenza Vaccine: Shortages in 2004-05 Season Underscore Need for Better Preparation (GAO-05-984), and on previous GAO reports and testimonies about influenza vaccine supply and pandemic preparedness. A number of lessons emerged from federal, state, and local responses to the 2004-05 influenza vaccine shortage that carry implications for handling future vaccine shortages in either an annual influenza season or an influenza pandemic. First, limited contingency planning slows response. At the start of the 2004-05 influenza season, when the supply shortfall became apparent, the nation lacked a contingency plan specifically to address severe shortages. 
The absence of such a plan led to delays and uncertainties on the part of state and local public health entities on how best to ensure access to vaccine by individuals at high risk of severe influenza-related complications. Second, streamlined mechanisms to expedite vaccine availability are key to an effective response. During the 2004-05 shortage, for example, federal purchases of vaccine licensed for use in other countries but not the United States were not completed in time to meet peak demand. Some states' experience also highlighted the importance of mechanisms to transfer available vaccine quickly and easily from one state to another. Third, effective response requires clear and consistent communication. Consistency among federal, state, and local communications is critical for averting confusion. State and local health officials also emphasized the value of updated information when responding to changing circumstances, using diverse media to reach diverse audiences, and educating providers and the public about prevention alternatives. Over the past 5 years, GAO has urged the Department of Health and Human Services (HHS) to complete its plan to prepare for and respond to an influenza pandemic. GAO has reported on the importance of planning to address critical issues such as how vaccine will be purchased and distributed; how population groups will be given priority for vaccination; and how federal resources should be deployed before the nation faces a pandemic. On November 2, 2005, HHS released its pandemic influenza plan. GAO did not have the opportunity to review the plan before issuing this statement to determine the extent to which the plan addresses these critical issues.
The LDA requires lobbyists to register with the Secretary of the Senate and the Clerk of the House and to file quarterly reports disclosing their lobbying activity. Lobbyists are required to file their registrations and reports electronically with the Secretary of the Senate and the Clerk of the House through a single entry point. Registrations and reports must be publicly available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House. No specific statutory requirements exist for lobbyists to generate or maintain documentation in support of the information disclosed in the reports they file. However, guidance issued by the Secretary of the Senate and the Clerk of the House recommends that lobbyists retain copies of their filings and documentation supporting reported income and expenses for at least 6 years after they file their reports. The LDA requires that the Secretary of the Senate and the Clerk of the House provide guidance and assistance on the registration and reporting requirements and develop common standards, rules, and procedures for LDA compliance. The Secretary of the Senate and the Clerk of the House review the guidance semiannually. It was last reviewed December 15, 2014. The last revision was on February 15, 2013, to (among other issues) update the reporting thresholds for inflation. The guidance provides definitions of terms in the LDA, elaborates on the registration and reporting requirements, includes specific examples of different scenarios, and provides explanations of why certain scenarios prompt or do not prompt disclosure under the LDA. The Secretary of the Senate and Clerk of the House’s Offices told us they continue to consider information we report on lobbying disclosure compliance when they periodically update the guidance. In addition, they told us they e-mail registered lobbyists quarterly on common compliance issues and reminders to file reports by the due dates. 
The LDA defines a lobbyist as an individual who is employed or retained by a client for compensation, who has made more than one lobbying contact (written or oral communication to a covered executive or legislative branch official made on behalf of a client), and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during the quarter. Lobbying firms are persons or entities that have one or more employees who lobby on behalf of a client other than that person or entity. Lobbying firms are required to register with the Secretary of the Senate and the Clerk of the House for each client if the firms receive or expect to receive over $3,000 in income from that client for lobbying activities. Lobbyists are also required to submit a quarterly report, also known as an LD-2 report (LD-2), for each registration filed. The LD-2s contain information that includes: the name of the lobbyist reporting on quarterly lobbying activities and the name of the client for whom the lobbyist lobbied; a list of individuals who acted as lobbyists on behalf of the client during the reporting period; whether any lobbyists served in covered positions in the executive or legislative branch in the previous 20 years; codes describing general issue areas, such as agriculture and education; a description of the specific lobbying issues; houses of Congress and federal agencies lobbied during the reporting period; and reported income (or expenses for organizations with in-house lobbyists) related to lobbying activities during the quarter (rounded to the nearest $10,000). The LDA also requires lobbyists to report certain political contributions semiannually in the LD-203 report. These reports must be filed 30 days after the end of a semiannual period by each lobbying firm registered to lobby and by each individual listed as a lobbyist on a firm’s lobbying report. 
The lobbyists or lobbying firms must list the name of each federal candidate or officeholder, leadership political action committee, or political party committee to which they made contributions equal to or exceeding $200 in the aggregate during the semiannual period; report contributions made to presidential library foundations and presidential inaugural committees; report funds contributed to pay the cost of an event to honor or recognize a covered official, funds paid to an entity named for or controlled by a covered official, and contributions to a person or entity in recognition of an official, or to pay the costs of a meeting or other event held by or in the name of a covered official; and certify that they have read and are familiar with the gift and travel rules of the Senate and House and that they have not provided, requested, or directed a gift or travel to a member, officer, or employee of Congress that would violate those rules. The Secretary of the Senate and the Clerk of the House, along with USAO, are responsible for ensuring LDA compliance. The Secretary of the Senate and the Clerk of the House notify lobbyists or lobbying firms in writing that they are not complying with the LDA reporting requirements. Subsequently, they refer those lobbyists who fail to provide an appropriate response to USAO. USAO researches these referrals and sends additional noncompliance notices to the lobbyists or lobbying firms, requesting that they file reports or terminate their registration. If USAO does not receive a response after 60 days, it decides whether to pursue a civil or criminal case against each noncompliant lobbyist. A civil case could lead to penalties up to $200,000 for each violation, while a criminal case—usually pursued if a lobbyist’s noncompliance is found to be knowing and corrupt—could lead to a maximum of 5 years in prison. 
Generally, under the LDA, within 45 days of being employed or retained to make a lobbying contact on behalf of a client, the lobbyist must register by filing an LD-1 form with the Clerk of the House and the Secretary of the Senate. Thereafter, the lobbyist must file quarterly disclosure (LD-2) reports detailing the lobbying activities. Of the 2,950 new registrations we identified for the third and fourth quarters of 2013 and first and second quarters of 2014, we matched 2,659 of them (90 percent) to corresponding LD-2 reports filed within the same quarter as the registration. These results are consistent with the findings we have reported in prior reviews. We used the House lobbyists’ disclosure database as the source of the reports and used an electronic matching algorithm that allows for misspellings and other minor inconsistencies between the registrations and reports. Figure 1 shows lobbyists filed disclosure reports as required for most new lobbying registrations from 2010 through 2014. For selected elements of lobbyists’ LD-2 reports that can be generalized to the population of lobbying reports, unless otherwise noted, our findings have been consistent from year to year. We used tests that adjusted for multiple comparisons to assess the statistical significance of changes over time. Most lobbyists reporting $5,000 or more in income or expenses provided written documentation to varying degrees for the reporting elements in their disclosure reports. For this year’s review, lobbyists for an estimated 93 percent of LD-2 reports provided written documentation for the income and expenses reported for the third and fourth quarters of 2013 and the first and second quarters of 2014. Figure 2 shows that for most LD-2 reports, lobbyists provided documentation for income and expenses for sampled reports from 2010 through 2014. Figure 3 shows that for some LD-2 reports, lobbyists rounded their income or expenses incorrectly. 
We identified 21 percent of reports as having rounding errors. We have found that rounding difficulties have been a recurring issue for LD-2 reports from 2010 through 2014. Some lobbyists who reported expenses told us that, based on their reading of the LD-2 form, they believed they were required to report the exact amount. While this is not consistent with the LDA or the guidance, it may be a source of some of the confusion regarding rounding errors. In 2014, 6 percent of lobbyists reported the exact amount of income or expenses. Lobbyists for an estimated 94 percent of LD-2 reports filed year-end 2013 LD-203 reports for all lobbyists listed on the report as required. All but two lobbying firms filed LD-203s for the lobbying firm itself before we performed our check. One of the two firms filed an LD-203 as soon as we brought the omission to its attention; the other firm did not respond to our request for information about filing the LD-203. Figure 8 shows that lobbyists for most lobbying firms filed contribution reports as required in our sample from 2010 through 2014. All individual lobbyists and lobbying firms reporting lobbying activity are required to file LD-203 reports semiannually, even if they have no contributions to report, because they must certify compliance with the gift and travel rules. The LDA requires a lobbyist to disclose previously held covered positions when first registering as a lobbyist for a new client. This can be done either on the LD-1 or on the LD-2 quarterly filing when added as a new lobbyist. This year, we estimate that 14 percent of all LD-2 reports did not properly disclose one or more previously held covered positions as required. Figure 9 shows the extent to which lobbyists failed to properly disclose one or more covered positions as required from 2010 through 2014. Lobbyists amended 19 of the 100 LD-2 disclosure reports in our original sample to make changes to previously reported information after we contacted them. 
Of the 19 reports, 10 were amended after we notified the lobbyists of our review, but before we met with them. An additional 9 of the 19 reports were amended after we met with the lobbyists to review their documentation. We consistently find a notable number of amended LD-2 reports in our sample each year following notification of our review. This suggests that sometimes our contact spurs lobbyists to more closely scrutinize their reports than they would have without our review. Table 1 lists reasons lobbying firms in our sample amended their LD-1 or LD-2 reports. As part of our review, we compared contributions listed on lobbyists’ and lobbying firms’ LD-203 reports against those political contributions reported in the FEC database to identify whether political contributions were omitted on LD-203 reports in our sample. The sample of LD-203 reports we reviewed contained 80 reports with contributions and 80 reports without contributions. We estimate that overall for 2014, lobbyists failed to disclose one or more reportable contributions on 4 percent of reports. Table 2 illustrates that from 2010 through 2014 most lobbyists disclosed FEC reportable contributions on their LD-203 reports as required. In 2014, nine LD-203 reports were amended in response to our review. As part of our review, 93 different lobbying firms were included in our 2014 sample of LD-2 disclosure reports. Consistent with prior reviews, most lobbying firms reported that they found it “very easy” or “somewhat easy” to comply with reporting requirements. Of the 93 different lobbying firms in our sample, 16 reported that the disclosure requirements were “very easy,” 54 reported them “somewhat easy,” and 13 reported them “somewhat difficult” or “very difficult.” (See figure 10.) Most lobbying firms we surveyed rated the definitions of terms used in LD-2 reporting as “very easy” or “somewhat easy” to understand with regard to meeting their reporting requirements. This is consistent with prior reviews. 
Figures 11 through 15 show what lobbyists reported as their ease of understanding the terms associated with LD-2 reporting requirements from 2010 through 2014. USAO officials stated that they continue to have sufficient personnel resources and authority under the LDA to enforce reporting requirements, including imposing civil or criminal penalties for noncompliance. Noncompliance refers to a lobbyist’s or lobbying firm’s failure to comply with the LDA. According to USAO officials, they have one contract paralegal specialist assigned full time, as well as five civil attorneys and one criminal attorney assigned part time for LDA compliance work. In addition, USAO officials stated that the USAO participates in a program that provides Special Assistant United States Attorneys (SAUSA) to the USAO. Some of the SAUSAs assist with LDA compliance by working with the Assistant United States Attorneys and contract paralegal specialist to contact referred lobbyists or lobbying firms who do not comply with the LDA. USAO officials stated that lobbyists resolve their noncompliance issues by filing LD-2s, LD-203s, LD-2 amendments, or by terminating their registration, depending on the issue. Resolving referrals can take anywhere from a few days to years, depending on the circumstances. During this time, USAO uses summary reports from its database to track the overall number of referrals that are pending or become compliant as a result of the lobbyist receiving an e-mail, phone call, or noncompliance letter. Referrals remain in the pending category until they are resolved. The category is divided into the following areas: “initial research for referral,” “responded but not compliant,” “no response /waiting for a response,” “bad address,” and “unable to locate.” USAO focuses its enforcement efforts primarily on the responded but not compliant group. USAO attempts to review pending cases every 6 months, according to officials. 
Officials told us that after four unsuccessful attempts have been made, USAO confers with both the Secretary of the Senate and the Clerk of the House to determine whether further action should be taken. In some cases where the lobbying firm is repeatedly referred for not filing disclosure reports but does not appear to be actively lobbying, USAO suspends enforcement actions. USAO monitors these firms, including checking the lobbying disclosure databases maintained by the Secretary of the Senate and the Clerk of the House. If the lobbyist begins to lobby again, USAO will resume enforcement actions. As of February 26, 2015, USAO had received 2,308 referrals from both the Secretary of the Senate and the Clerk of the House for failure to comply with LD-2 reporting requirements cumulatively for filing years 2009 through 2014. Table 3 shows the number and status of the referrals received and the number of enforcement actions taken by USAO in its effort to bring lobbying firms into compliance. Enforcement actions include the number of letters, e-mails, and calls made by USAO. About 52 percent (1,196 of 2,308) of the total referrals received are now compliant because lobbying firms either filed their reports or terminated their registrations. In addition, some of the referrals were found to be compliant when USAO received the referral, and therefore no action was taken. This may occur when lobbying firms respond to the contact letters from the Secretary of the Senate and Clerk of the House after USAO has received the referrals. About 48 percent (1,101 of 2,308) of referrals are pending further action because USAO was unable to locate the lobbying firm, did not receive a response from the firm, or plans to conduct additional research to determine if it can locate the lobbying firm. The remaining 11 referrals did not require action or were suspended because the lobbyist or client was no longer in business or the lobbyist was deceased. 
LD-203 referrals consist of two types: LD-203(R) referrals represent lobbying firms that have failed to file LD-203 reports for the firm, and LD-203 referrals represent the lobbyists at the lobbying firm who have failed to file their individual LD-203 reports as required. As of February 26, 2015, USAO had received 1,551 LD-203(R) referrals and 2,745 LD-203 referrals from the Secretary of the Senate and the Clerk of the House for lobbying firms and lobbyists for noncompliance with reporting requirements cumulatively for calendar years 2009 through 2014. LD-203 referrals may be more complicated than LD-2 referrals because both the lobbying firm and the individual lobbyists within the firm are each required to file an LD-203. However, according to USAO, lobbyists employed by a lobbying firm typically use the firm’s contact information rather than their personal contact information, which makes it difficult to locate a lobbyist who has left the firm. USAO reported that, while many firms have assisted it by providing contact information for lobbyists, they are not required to do so. According to officials, USAO has difficulty pursuing LD-203 referrals for lobbyists who have departed a firm without leaving forwarding contact information with the firm. While USAO uses web searches and online databases, including LinkedIn, LexisNexis, Glassdoor, Facebook, and Sunlight Foundation websites, to find these missing lobbyists, it is not always successful. When USAO cannot locate a lobbyist who has left a firm without forwarding contact information, it has no recourse to pursue enforcement action, according to officials. Table 4 shows the status of LD-203(R) referrals received and the number of enforcement actions taken by USAO in its effort to bring lobbying firms into compliance. 
About 46 percent (714 of 1,551) of the lobbying firms referred by the Secretary of the Senate and the Clerk of the House for noncompliance for the 2009 through 2014 reporting periods are now considered compliant because the firms either have filed their reports or have terminated their registrations. About 54 percent (836 of 1,551) of the referrals are pending further action. Table 5 shows that as of February 26, 2015, USAO had received 2,745 LD-203 referrals from the Secretary of the Senate and the Clerk of the House for lobbyists who failed to comply with LD-203 reporting requirements for calendar years 2009 through 2014, along with the status of the referrals received and the number of enforcement actions taken by USAO in its effort to bring lobbyists into compliance. Of these referrals, 44 percent (1,211 of 2,745) of the lobbyists had come into compliance by filing their reports or are no longer registered as lobbyists, while about 56 percent (1,525 of 2,745) of the referrals are pending further action because USAO was unable to locate the lobbyist, did not receive a response from the lobbyist, or plans to conduct additional research to determine if it can locate the lobbyist. Table 6 shows that as of February 26, 2015, USAO had received LD-203 referrals from the Secretary of the Senate and the Clerk of the House for 3,841 lobbyists who failed to comply with LD-203 reporting requirements for any filing year from 2009 through 2014, along with the status of compliance for the individual lobbyists listed on those referrals. Of these lobbyists, 48 percent (1,861 of 3,841) had come into compliance by filing their reports or are no longer registered as lobbyists. About 52 percent (1,980 of 3,841) of the referrals are pending action because USAO could not locate the lobbyists, did not receive a response from the lobbyists, or plans to conduct additional research to determine if it can locate the lobbyists. 
USAO officials said that many of the pending LD-203 referrals represent lobbyists who no longer lobby for the lobbying firms affiliated with the referrals, even though these lobbying firms may be listed on the lobbyist’s LD-203 report. According to USAO officials, lobbyists who repeatedly fail to file reports are labeled chronic offenders and referred to one of the assigned attorneys for follow-up. According to officials, USAO monitors and reviews chronic offenders to determine appropriate enforcement actions, which may lead to settlements or other successful civil actions. However, instead of pursuing a civil penalty, USAO may decide to pursue other actions, such as closing out referrals if the lobbyist appears to be inactive. According to USAO, in these cases there would be no benefit in pursuing enforcement actions. USAO finalized a settlement in the amount of $30,000 with Alan Mauk & Alan Mauk Associates, Ltd., to address failure to file for several years. According to officials, USAO is close to finalizing a settlement with another firm for repeated failure to file. We provided a draft of this report to the Attorney General for review and comment. The Department of Justice provided a technical comment, which we incorporated into the draft as appropriate. We are sending copies of this report to the Attorney General, the Secretary of the Senate, the Clerk of the House of Representatives, and interested congressional committees and members. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Our objectives were to determine the extent to which lobbyists are able to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA), by providing documentation to support information contained on registrations and reports filed under the LDA; to identify challenges and potential improvements to compliance, if any; and to describe the resources and authorities available to the U.S. Attorney’s Office for the District of Columbia (USAO), its role in enforcing LDA compliance, and the efforts it has made to improve enforcement of the LDA. We used information in the lobbying disclosure database maintained by the Clerk of the House of Representatives (Clerk of the House). To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and consulted with knowledgeable officials. Although registrations and reports are filed through a single web portal, each chamber subsequently receives copies of the data and follows different data-cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases caused by the differences in data processing. For example, Senate staff told us during previous reviews that they set aside a greater proportion of registration and report submissions than the House does for manual review before entering the information into the database. As a result, the Senate database would be slightly less current than the House database on any given day, pending review and clearance. House staff told us during previous reviews that they rely heavily on automated processing. 
In addition, while they manually review reports that do not perfectly match information on file for a given lobbyist or client, staff members will approve and upload such reports as originally filed by each lobbyist, even if the reports contain errors or discrepancies (such as a variant on how a name is spelled). Nevertheless, we have no reason to believe that the content of the Senate and House systems varies substantially. For this review, we determined that House disclosure data were sufficiently reliable for identifying a sample of quarterly disclosure (LD-2) reports and for assessing whether newly registered lobbyists also filed required reports. We used the House database for sampling LD-2 reports from the third and fourth quarters of 2013 and the first and second quarters of 2014, as well as for sampling year-end 2013 and midyear 2014 political contributions (LD-203) reports. We also used the database for matching quarterly registrations with filed reports. We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House, both of which have key roles in the lobbying disclosure process; however, we did consult with officials from each office, and they provided us with general background information at our request. To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a stratified random sample of 100 LD-2 reports from the third and fourth quarters of 2013 and the first and second quarters of 2014. We excluded reports with no lobbying activity or with income or expenses of less than $5,000 from our sampling frame. We drew our sample from the 46,599 activity reports filed for those quarters that were available in the public House database as of our final download date for each quarter. Our sample of LD-2 reports was not designed to detect differences over time. 
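As a rough illustration of the sampling approach described above, the sketch below builds a frame that excludes reports under $5,000 and draws a stratified random sample by quarter. The records, dollar amounts, and the equal allocation of 25 reports per quarter are hypothetical illustrations, not GAO's actual data or sample design.

```python
import random

random.seed(42)

# Hypothetical filings: 250 reports per quarter with random dollar amounts.
quarters = ["2013Q3", "2013Q4", "2014Q1", "2014Q2"]
reports = [{"id": i, "quarter": q, "amount": random.randrange(0, 50_000)}
           for i, q in enumerate(quarters * 250)]

# Sampling frame: exclude reports with income or expenses under $5,000.
frame = [r for r in reports if r["amount"] >= 5_000]

# Stratify by quarter; draw an (assumed) equal allocation of 25 per stratum.
sample = []
for q in quarters:
    stratum = [r for r in frame if r["quarter"] == q]
    sample.extend(random.sample(stratum, 25))

print(len(sample))  # 100
```

Stratifying by quarter guarantees each filing period is represented; an unstratified draw of 100 could by chance underrepresent one quarter.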
However, we conducted tests of significance for changes from 2010 to 2014 for the generalizable elements of our review and found that results were generally consistent from year to year; there were few statistically significant changes after using a Bonferroni adjustment to account for multiple comparisons. These changes are identified in the report. While the results provide some confidence that apparent fluctuations in our results across years are likely attributable to sampling error, the inability to detect significant differences may also be related to the nature of our sample, which was relatively small and was designed only for cross-sectional analysis. A Bonferroni adjustment is a statistical adjustment designed to reduce the chance of making a type I inferential error, that is, concluding that a difference exists when it is instead an artifact of sampling error. The adjustment raises the threshold for concluding that any single difference is “statistically significant” so that, overall, the chance of making at least one type I error when making multiple comparisons does not exceed a specified level. All percentage estimates are surrounded by 95 percent confidence intervals, meaning that the interval would contain the actual population value for 95 percent of the samples that we could have drawn. The percentage estimates for LD-2 reports have 95 percent confidence intervals of plus or minus 12.1 percentage points or less. We contacted all the lobbyists and lobbying firms in our sample and, using a structured web-based survey, asked them to confirm key elements of the LD-2 and whether they could provide documentation for key elements in their reports, including the amount of income reported for lobbying activities; the amount of expenses reported on lobbying activities; the names of the lobbyists listed in the report; the houses of Congress and federal agencies that they lobbied; and the issue codes listed to describe their lobbying activity. 
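The Bonferroni adjustment described above can be sketched in a few lines: with k comparisons and a familywise error level alpha, each individual p-value is tested against alpha / k. The p-values below are illustrative only, not GAO's actual test results.

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each comparison, whether it stays significant after
    the Bonferroni adjustment: each p-value is compared against the
    stricter per-comparison threshold alpha / k."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

# Four hypothetical year-to-year comparisons; with alpha = 0.05 and
# k = 4, the adjusted threshold is 0.0125.
p_values = [0.030, 0.004, 0.200, 0.012]
print(bonferroni_significant(p_values))  # [False, True, False, True]
```

Note that 0.030 would be "significant" at the unadjusted 0.05 level but does not survive the adjustment, which is exactly the conservatism the adjustment is designed to provide.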
After reviewing the survey results for completeness, we conducted interviews with the lobbyists and lobbying firms to review the documentation they reported having in their online survey responses for selected elements of their LD-2 reports. Prior to each interview, we conducted a search to determine whether lobbyists properly disclosed their covered positions as required by the LDA. We reviewed the lobbyists’ previous work histories by searching lobbying firms’ websites, LinkedIn, Leadership Directories, Legistorm, and Google. Prior to 2008, lobbyists were required to disclose only covered official positions held within 2 years of registering as a lobbyist for the client. The Honest Leadership and Open Government Act of 2007 amended that time frame to require disclosure of covered official positions held within 20 years before the date the lobbyist first lobbied on behalf of the client. Lobbyists are required to disclose previously held covered official positions either on the client registration (LD-1) or on an LD-2 report. Consequently, those who held covered official positions may have disclosed the information on the LD-1 or on an LD-2 report filed prior to the report we examined as part of our random sample. Therefore, where we found evidence that a lobbyist previously held a covered official position that was not disclosed on the LD-2 report under review, we conducted an additional review of the publicly available Secretary of the Senate or Clerk of the House database to determine whether the lobbyist properly disclosed the covered official position on a prior report or LD-1. Finally, if a lobbyist appeared to hold a covered position that was not disclosed, we asked for an explanation at the interview with the lobbying firm to ensure that our research was accurate. In previous reports, we reported the lower bound of a 90 percent confidence interval to provide a minimum estimate of omitted covered positions and omitted contributions with a 95 percent confidence level. 
We did so to account for the possibility that our searches may have failed to identify all possible omitted covered positions and contributions. As we have developed our methodology over time, we are more confident in the comprehensiveness of our searches for these items. Accordingly, this report presents the estimated percentages for omitted contributions and omitted covered positions, rather than the minimum estimates. As a result, percentage estimates for these items will differ slightly from the minimum percentage estimates presented in prior reports. In addition to examining the content of the LD-2 reports, we confirmed whether the most recent LD-203 reports had been filed for each firm and lobbyist listed on the LD-2 reports in our random sample. Although this review represents a random selection of lobbyists and firms, it is not a direct probability sample of firms filing LD-2 reports or lobbyists listed on LD-2 reports. As such, we did not estimate the likelihood that LD-203 reports were appropriately filed for the population of firms or lobbyists listed on LD-2 reports. To determine if the LDA’s requirement for lobbyists to file a report in the quarter of registration was met for the third and fourth quarters of 2013 and the first and second quarters of 2014, we used data filed with the Clerk of the House to match newly filed registrations with corresponding disclosure reports. Using an electronic matching algorithm that includes strict and loose text matching procedures, we identified matching disclosure reports for 2,659, or 90 percent, of the 2,950 newly filed registrations. We began by standardizing client and lobbyist names in both the report and registration files (including removing punctuation and standardizing words and abbreviations, such as “company” and “CO”). We then matched reports and registrations using the House identification number (which is linked to a unique lobbyist-client pair), as well as the names of the lobbyist and client. 
For reports we could not match by identification number and standardized name, we also attempted to match reports and registrations by client and lobbyist name, allowing for variations in the names to accommodate minor misspellings or typos. For these cases, we used professional judgment to determine whether names with typos were sufficiently similar to consider as matches. We could not readily identify matches in the report database for the remaining registrations using electronic means. To assess the accuracy of the LD-203 reports, we analyzed stratified random samples of LD-203 reports from the 30,524 total LD-203 reports. The first sample contains 80 of the 9,787 reports with political contributions, and the second contains 80 of the 20,737 reports listing no contributions. Each sample contains 40 reports from the year-end 2013 filing period and 40 reports from the midyear 2014 filing period. These samples allow us to generalize estimates in this report to either the population of LD-203 reports with contributions or the population of reports without contributions to within a 95 percent confidence interval of plus or minus 9.5 percentage points or less. Although our sample of LD-203 reports was not designed to detect differences over time, we conducted tests of significance for changes from 2010 to 2014 and found no statistically significant differences after adjusting for multiple comparisons. While the results provide some confidence that apparent fluctuations in our results across years are likely attributable to sampling error, the inability to detect significant differences may also be related to the nature of our sample, which was relatively small and designed only for cross-sectional analysis. We analyzed the contents of the LD-203 reports and compared them to contribution data found in the publicly available Federal Election Commission’s (FEC) political contribution database. 
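The strict and loose text matching described above can be illustrated with a short sketch: names are standardized (punctuation stripped, common abbreviations normalized), strict matches require identical standardized names, and loose matches tolerate minor misspellings via a similarity ratio. The abbreviation table, the example names, and the 0.9 similarity threshold are hypothetical stand-ins, not GAO's actual procedure.

```python
import re
from difflib import SequenceMatcher

# Hypothetical abbreviation table, e.g. "company" -> "co" as in the text.
ABBREV = {"company": "co", "incorporated": "inc", "corporation": "corp"}

def standardize(name):
    """Lowercase, strip punctuation, and normalize common abbreviations."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    return " ".join(ABBREV.get(w, w) for w in name.split())

def loose_match(a, b, threshold=0.9):
    """Loose match: accept names whose standardized forms are nearly
    identical, to accommodate minor misspellings or typos."""
    return SequenceMatcher(None, standardize(a), standardize(b)).ratio() >= threshold

# Strict match: identical after standardization.
print(standardize("Acme Company, Inc.") == standardize("ACME CO INC"))  # True
# Loose match: a one-character typo still matches.
print(loose_match("Acme Co Inc", "Acme Co Imc"))  # True
```

In practice such a threshold only flags candidate pairs; as the text notes, borderline cases still require professional judgment.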
We consulted with staff at the FEC responsible for administering the database and determined that the data were sufficiently reliable for our purposes. We compared the FEC-reportable contributions reported on the LD-203 reports with information in the FEC database. The verification process required text and pattern matching procedures, so we used professional judgment when assessing whether an individual listed in the FEC database is the same individual filing an LD-203. For contributions reported in the FEC database but not on the LD-203 report, we asked the lobbyists or organizations to explain why the contribution was not listed on the LD-203 report or to provide documentation of those contributions. As with covered positions on LD-2 disclosure reports, we cannot be certain that our review identified all cases of FEC-reportable contributions that were inappropriately omitted from a lobbyist’s LD-203 report. We did not estimate the percentage of other, non-FEC political contributions that were omitted because they tend to constitute a small minority of all listed contributions and cannot be verified against an external source. To identify challenges to compliance, we used a structured web-based survey to obtain the views of the 93 different lobbying firms included in our sample on any challenges to compliance. The number of different lobbying firms, 93, is less than our sample of 100 reports because some lobbying firms had more than one LD-2 report included in our sample. We calculated our responses based on the number of different lobbying firms that we contacted rather than the number of interviews. Prior to our calculations, we removed duplicate lobbying firms based on the most recent date of their responses. For those cases with the same response date, the decision rule was to keep the case with the smallest assigned case identification number. 
To obtain their views, we asked the firms to rate the ease of complying with the LD-2 disclosure requirements using a scale of “very easy,” “somewhat easy,” “somewhat difficult,” or “very difficult.” In addition, using the same scale, we asked them to rate the ease of understanding the terms associated with LD-2 reporting requirements. To describe the resources and authorities available to the U.S. Attorney’s Office for the District of Columbia (USAO) and its efforts to improve its enforcement of the LDA, we interviewed officials from USAO. We obtained information on the capabilities of the tracking system officials established to track and report compliance trends and referrals, as well as other practices established to focus resources on enforcement of the LDA. USAO provided us with updated reports from the tracking system on the number and status of referrals and chronically noncompliant lobbyists and lobbying firms. The mandate does not require us to identify lobbyists who failed to register and report in accordance with LDA requirements, or to determine, for those lobbyists who did register and report, whether all lobbying activity or contributions were disclosed. We conducted this performance audit from June 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The random sample of lobbying disclosure reports we selected was based on unique combinations of lobbyist and client names (see table 7). See table 8 for a list of the lobbyists and lobbying firms from our random sample of lobbying contributions reports with contributions. 
See table 9 for a list of the lobbyists and lobbying firms from our random sample of lobbying contribution reports without contributions. In addition to the contact named above, Bill Reinsberg (Assistant Director), Shirley Jones (Assistant General Counsel), and Katherine Wulff (analyst-in-charge) supervised the development of this report. Amy Bowser, Crystal Bernard, Kathleen Jones, Davis Judson, Stuart Kaufman, and Anna Maria Ortiz made key contributions to this report. Assisting with lobbyist file reviews were Dawn Bidne, Linda Collins, Joseph Fread, Ricky Harrison Jr., Shirley Hwang, Melissa King, Barbara Lancaster, Courtney Liesener, Heidi Nielson, Patricia Norris, Alan Rozzi, Karissa Schafer, Sarah Sheehan, Angela Smith, Natalie Swabb, and Nell Williams. Lobbying Disclosure: Observations on Lobbyists’ Compliance with New Disclosure Requirements. GAO-08-1099. Washington, D.C.: September 30, 2008. 2008 Lobbying Disclosure: Observations on Lobbyists’ Compliance with Disclosure Requirements. GAO-09-487. Washington, D.C.: April 1, 2009. 2009 Lobbying Disclosure: Observations on Lobbyists’ Compliance with Disclosure Requirements. GAO-10-499. Washington, D.C.: April 1, 2010. 2010 Lobbying Disclosure: Observations on Lobbyists’ Compliance with Disclosure Requirements. GAO-11-452. Washington, D.C.: April 1, 2011. 2011 Lobbying Disclosure: Observations on Lobbyists’ Compliance with Disclosure Requirements. GAO-12-492. Washington, D.C.: March 30, 2012. 2012 Lobbying Disclosure: Observations on Lobbyists’ Compliance with Disclosure Requirements. GAO-13-437. Washington, D.C.: April 1, 2013. 2013 Lobbying Disclosure: Observations on Lobbyists’ Compliance with Disclosure Requirements. GAO-14-485. Washington, D.C.: May 28, 2014.
The LDA requires lobbyists to file quarterly lobbying disclosure reports and semiannual reports on certain political contributions. The law also requires that GAO annually audit lobbyists' compliance with the LDA. GAO's objectives were to (1) audit the extent to which lobbyists can demonstrate compliance with disclosure requirements, (2) identify challenges to compliance that lobbyists report, and (3) describe the resources and authorities available to USAO in its role in enforcing LDA compliance, and the efforts USAO has made to improve enforcement. This is GAO's eighth report under the mandate. GAO reviewed a stratified random sample of 100 quarterly disclosure (LD-2) reports filed for the third and fourth quarters of calendar year 2013 and the first and second quarters of calendar year 2014. GAO also reviewed two random samples totaling 160 LD-203 reports from year-end 2013 and midyear 2014. This methodology allowed GAO to generalize to the population of 46,599 disclosure reports with $5,000 or more in lobbying activity and 30,524 reports of federal political campaign contributions. GAO also met with officials from USAO to obtain status updates on its efforts to focus resources on lobbyists who fail to comply. GAO provided a draft of this report to the Attorney General for review and comment. The Department of Justice provided a technical comment, which GAO incorporated as appropriate. For the 2014 reporting period, most lobbyists provided documentation for key elements of their disclosure reports to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA). For lobbying disclosure (LD-2) reports and political contributions (LD-203) reports filed during the third and fourth quarters of 2013 and the first and second quarters of 2014, GAO estimates that 90 percent of lobbyists filed initial LD-2 reports as required for new lobbying registrations (lobbyists are required to file LD-2 reports for the quarter in which they first register). 
GAO also estimates that 93 percent of lobbyists could provide documentation for income and expenses; however, 21 percent of these LD-2 reports were not properly rounded to the nearest $10,000, and 6 percent of those reports listed the exact amount. An estimated 94 percent filed year-end 2013 LD-203 reports as required, 14 percent of all LD-2 reports did not properly disclose one or more previously held covered positions (certain positions in the executive and legislative branches) as required, and 4 percent of all LD-203 reports omitted one or more reportable political contributions that were documented in the Federal Election Commission database. These findings are generally consistent with prior reports GAO issued for the 2010 through 2013 reporting periods and can be generalized to the population of disclosure reports. Over the past several years of reporting on lobbying disclosure, GAO has found that most lobbyists in the sample rated the terms associated with LD-2 reporting as “very easy” or “somewhat easy” to understand with regard to meeting their reporting requirements. However, some disclosure reports demonstrate compliance difficulties, such as failure to disclose covered positions or misreporting of income or expenses. In addition, lobbyists amended 19 of 100 original disclosure reports in GAO's sample, changing information previously reported. The U.S. Attorney's Office for the District of Columbia (USAO) stated that it has sufficient resources and authority to enforce LD-2 and LD-203 compliance with the LDA. It has one contract paralegal working full time and six attorneys working part time on LDA enforcement issues. USAO continued its efforts to follow up on referrals for noncompliance with lobbying disclosure requirements by contacting lobbyists by e-mail, telephone, and letter. Also, USAO has finalized a settlement with a lobbyist to resolve multiple instances of noncompliance and is in the process of finalizing another settlement.
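The $10,000 rounding convention noted above can be illustrated with a minimal sketch: a compliant figure is a multiple of $10,000, while reporting the exact amount (as 6 percent of the improperly rounded reports did) fails that check. This is an illustration of the arithmetic only, not GAO's actual review tool.

```python
def properly_rounded(amount):
    """A reported income or expense figure complies with the rounding
    convention only if it is a multiple of $10,000."""
    return amount % 10_000 == 0

def round_to_nearest_10k(amount):
    """Round an exact dollar amount to the nearest $10,000 for reporting."""
    return round(amount / 10_000) * 10_000

print(properly_rounded(230_000))      # True: a properly rounded figure
print(properly_rounded(231_742))      # False: an exact, unrounded amount
print(round_to_nearest_10k(231_742))  # 230000
```

One caveat of this sketch: Python's `round` uses banker's rounding, so an amount exactly halfway between two multiples (e.g., $235,000) rounds to the even multiple; a filer's convention for ties may differ.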
Payment card use has grown dramatically since the introduction of cards at the beginning of the 20th century. Hotels, oil companies, and department stores issued cards associated with charge accounts before World War I. In 1950, Diners Club established the first general purpose charge card, which allowed its cardholders to purchase goods and services from many different merchants. In the late 1950s, Bank of America began offering the BankAmericard, one of the first general purpose credit cards. Unlike charge cards, which require the balance to be paid in full each month, credit cards allow cardholders to make purchases up to a credit limit and pay the balance off over time. To increase the number of consumers carrying the card and to reach retailers outside of Bank of America’s area of operations, other banks were given the opportunity to license Bank of America’s credit card. As the network of banks issuing these credit cards expanded, the network was converted into a membership-owned corporation that became the Visa network. MasterCard began in 1966 as an association of member-owned banks. American Express launched its card network in 1958, and in the 1980s, Discover, then a business unit of Sears, issued the Discover card. In 2006, MasterCard became a publicly traded company with a board of directors in which a majority of directors are independent from its financial institution customers. Visa became a publicly traded company in 2008, and its financial institution members became common stockholders with a minority of shares. Today, customers can choose among different types of payment cards. Consumers can use debit cards with a personal identification number (PIN) they enter on payment or with a signature. With a debit card, the payment is transferred directly from the cardholder’s checking account to the merchant’s account. 
Credit cards allow cardholders to access borrowed funds to make a purchase and generally have a grace period between the purchase of an item and the payment date. Then the cardholder can pay the charges in full or extend the loan and keep making charges to the credit limit. Cardholders who do not pay for the charges in full are assessed finance charges by their financial institution and pay interest on the remaining balance. Credit cards offer cardholders several benefits, including the convenience of not having to carry cash or a checkbook, a convenient source of unsecured credit that allows consumers to finance their purchases over time, an interest-free period to finance purchases if balances are paid on time, improved theft and loss prevention as compared with cash and easier dispute resolution in the event of problems, and an easy record-keeping mechanism that can be useful for budgeting, planning, and income tax preparation. The number of credit cards in circulation and the extent to which they are used has also grown dramatically. The range of goods and services that can be purchased with credit cards has expanded, with cards now being used to pay for groceries, health care, and federal and state income taxes. In 2007, U.S. consumers held more than 694 million credit cards from Visa, MasterCard, American Express, and Discover, and as shown in figure 1, the total value of transactions for which these cards were used exceeded $1.9 trillion, according to data from the Card Industry Directory. Many of the largest issuers of credit cards in the United States are commercial banks, including many of the largest banks in the country. More than 6,000 depository institutions issue credit cards, but over the past decade, the majority of accounts increasingly have become concentrated among a small number of large issuers. 
Table 1 shows the largest bank issuers of credit cards as of the end of 2008, and their percentage of the total United States credit card market, according to an industry newsletter. In addition, community banks, thrifts, and credit unions each issue credit and debit cards. According to information provided by banking regulators and banking associations, about 75 percent of community banks, 49 percent of credit unions, and 13 percent of thrifts issue credit cards. Merchants’ costs of payment card acceptance involve several different fees that are divided among the parties involved in a credit card transaction. The parties involved in processing credit card transactions vary depending on the network used by the card. The United States has four primary general purpose credit card networks. For the two largest networks—Visa and MasterCard—transactions involve four parties: (1) the financial institution that issued a cardholder’s card, (2) the cardholder, (3) the merchant that accepts the cardholder’s card, and (4) an acquiring financial institution. A merchant that accepts Visa or MasterCard credit cards enters into a contract with an acquiring institution that has a relationship with Visa or MasterCard (or both) to provide card payment processing services. The merchant’s contract with the acquiring institution or its agent specifies the level of services the merchant receives, as well as the merchant discount fee and other fees that will apply to the processing of the merchant’s card transactions. The acquiring institution charges the merchant a merchant discount fee that is established through negotiations. The majority of the merchant discount fee is generally paid from the acquiring institution to the issuing institution in the form of an interchange fee. A merchant does not pay the interchange fee directly. 
Rather, the Visa or MasterCard network transfers the interchange fee portion of the merchant discount fee from the acquiring institution to the issuing institution. The acquiring institution retains the balance of the merchant discount fee to cover its costs for processing the transaction. Figure 2 illustrates the four parties in a typical credit card transaction and how fees are transferred among the parties. In this example, when a cardholder makes a $100 purchase, the merchant pays $2.20 in merchant discount fees for the transaction. This amount is divided between the issuing institution, which receives $1.70 in interchange fees, and the acquiring institution, which receives 50 cents for processing the transaction. For transactions on the other two major card networks—American Express and Discover—generally only three parties are involved: the cardholder, the merchant, and one company that acts as both the issuing and acquiring entity. Merchants that choose to accept these two types of cards typically negotiate directly with American Express and Discover over the merchant discount fees that will be assessed on their transactions. Acquiring institutions provide the means for merchants to accept credit cards, by forwarding the request for authorization through the card network to the cardholder’s issuing institution. The issuing institution authorizes the transaction by verifying that the account is valid and that the cardholder has a sufficient amount of credit for the sale. For merchants accepting cards in their stores, authorization generally occurs automatically through electronic point-of-sale terminals that read cards. Acquiring institutions clear and settle card purchases by providing payment from the issuing institution to the merchant’s account, minus the interchange fees and their own service fees. According to industry estimates, the process takes between 24 and 72 hours for the merchant to receive payment. 
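The fee flow in the $100 example above reduces to simple percentage arithmetic. The sketch below is only an illustration of that split; the function name is hypothetical and the rates come from the figure 2 example, not from any actual network schedule:

```python
def split_merchant_discount(purchase, interchange_rate, acquirer_rate):
    """Split a merchant discount fee between the issuing and acquiring
    institutions (illustrative; real fee schedules are far more complex)."""
    interchange_fee = purchase * interchange_rate   # transferred to the issuer
    acquirer_fee = purchase * acquirer_rate         # retained by the acquirer
    merchant_discount = interchange_fee + acquirer_fee
    merchant_receives = purchase - merchant_discount
    return merchant_discount, interchange_fee, acquirer_fee, merchant_receives

# The figure 2 example: $100 purchase, 1.70% interchange, 0.50% acquirer fee.
discount, interchange, acquirer, net = split_merchant_discount(100.00, 0.0170, 0.0050)
print(f"discount ${discount:.2f}, issuer ${interchange:.2f}, "
      f"acquirer ${acquirer:.2f}, merchant nets ${net:.2f}")
```

On a $100 purchase this reproduces the $2.20 merchant discount fee, the $1.70 interchange fee paid to the issuer, and the 50-cent share retained by the acquirer described in the text.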
Acquiring institutions also assume the risks of a merchant defaulting on the promise of goods. For example, if a merchant becomes bankrupt, its acquiring institution is responsible for settling claims with the network and issuers whose cardholders are waiting for goods or services. Interchange fees generally account for the largest portion of the fees for acceptance of Visa and MasterCard credit cards. The card networks set the fees, which vary based on several factors and generally range from 1 to 3 percent of the purchase price. Card network officials told us that they have developed lower rates to encourage certain merchants to accept their cards. Acquiring banks and card network representatives also told us that certain merchants and transaction types are considered more risky than others and pay higher interchange fees for accepting card payments. According to Visa and MasterCard officials, four main factors determine which interchange fee rates apply to a given transaction on their networks: Type of card: Different rates apply to different types of cards. Visa and MasterCard have separate interchange fee rates for general purpose consumer cards, rewards credit cards, commercial credit cards (issued to businesses), and debit cards. Debit card interchange fees generally are lower than those for credit cards. Among credit cards, premium cards such as those offering rewards and commercial cards generally have higher rates than those for standard or traditional cards. Merchant category: Card networks classify merchants according to their line of business. Network officials told us they develop lower interchange fee rates for industries that do not accept cards to encourage merchants in certain categories to accept cards. For example, grocery stores and utilities have lower interchange fees as a special incentive from the networks. 
Interchange fee rates are higher for merchants in industries such as travel and entertainment, in which network officials report customers spend more with their credit cards, providing the merchant with higher value. Merchant size (transaction volume): Merchants with large volumes of card transactions generally have lower interchange fee rates. Visa categorizes some merchants into three tiers based on transactions and sales volume, with top-tier merchants receiving the lowest rate. Visa and MasterCard officials told us that the lower rates also were designed to promote the use of their cards over other credit cards and forms of payment. Processing mode: Interchange fee rates also vary depending on how card transactions are processed. For example, transactions that occur without the card being physically present, such as on the Internet, have higher interchange fee rates because of the higher risk of fraud. Similarly, swiping a card through a card terminal, rather than manually entering the account number, would lower a merchant’s interchange rate. The swiped transaction provides more information to the issuer authorizing the sale, and issuers and card networks consider such transactions to be less risky because the card was present. Merchants generally learn of changes to their rates for accepting Visa and MasterCard cards through their acquiring institution. Smaller merchants generally receive one or more flat fees (known as a blended rate) for payment acceptance that include both the interchange fee and the acquiring institution’s fee. For merchants with blended rates, the costs of acceptance are uniform for each card type and interchange fee rates may not be disclosed on statements as a separate fee. In contrast, larger merchants generally receive detailed statements from their acquiring institution and card processors, which include interchange fee categories, network fees, and fees from the acquiring institution. 
These statements reflect “cost-plus” rates, because the acquiring institution provides the merchant with the details of the costs passed on from the network along with the acquiring institution’s own fees. Visa and MasterCard develop and publish interchange rate tables (available on their Web sites) that disclose the default rates that apply to various types of transactions. Visa and MasterCard typically publish new interchange schedules twice a year. Various federal agencies oversee credit card issuers. The Federal Reserve oversees issuers that are chartered as state banks and are also members of the Federal Reserve System. The Office of the Comptroller of the Currency (OCC) supervises card issuers chartered as national banks. Other regulators are the Federal Deposit Insurance Corporation (FDIC), which oversees state-chartered banks with federally insured deposits that are not members of the Federal Reserve System; the Office of Thrift Supervision, which oversees federally chartered and state-chartered savings associations with federally insured deposits; and the National Credit Union Administration, which oversees federally chartered and state- chartered credit unions whose accounts are federally insured. As part of their oversight, these regulators review card issuers’ compliance with the Truth In Lending Act (TILA)—the primary federal law pertaining to the extension of consumer credit—and ensure that an institution’s credit card operations do not pose a threat to the institution’s safety and soundness. The Federal Trade Commission (FTC) generally has responsibility for enforcing TILA and other consumer protection laws for credit card issuers that are not subject to the enforcement authority of other federal regulators. 
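The four rate-setting factors described earlier (type of card, merchant category, merchant size, and processing mode) can be thought of as adjustments layered on a base rate. The sketch below illustrates only that structure; every base rate and adjustment in it is invented for illustration and is not drawn from any actual Visa or MasterCard schedule:

```python
# Hypothetical base rates by card type (all values invented for illustration).
BASE_RATES = {"standard": 0.0150, "rewards": 0.0185, "commercial": 0.0230, "debit": 0.0095}

def interchange_rate(card_type, merchant_category, card_present, top_tier_merchant):
    """Apply the four factors the networks describe to a made-up base rate."""
    rate = BASE_RATES[card_type]
    if merchant_category in ("grocery", "utility"):       # incentive categories
        rate -= 0.0020
    elif merchant_category in ("travel", "entertainment"):  # higher-value sectors
        rate += 0.0030
    if not card_present:          # e.g., Internet orders carry more fraud risk
        rate += 0.0025
    if top_tier_merchant:         # high-volume merchants get lower rates
        rate -= 0.0015
    return round(rate, 4)

# A swiped standard card at a small grocery store vs. an online rewards purchase:
print(interchange_rate("standard", "grocery", True, False))
print(interchange_rate("rewards", "online_retail", False, False))
```

The point of the sketch is the interaction of the factors: the same cardholder purchase can carry very different interchange costs depending on what card is presented, where, and how the transaction is processed.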
To the extent that the imposition of interchange fees would constitute an anticompetitive or unfair business practice prohibited by the antitrust laws or the Federal Trade Commission Act, the Department of Justice (DOJ) and FTC, respectively, could take measures to ensure compliance with those laws. Interchange fees are the subject of several federal legislative proposals. The Credit Card Fair Fee Acts of 2009, introduced in June 2009, would, among other things, establish a process by which merchant groups and providers of access to credit card networks could negotiate interchange fees and other terms of network participation under an exemption from federal antitrust laws. The interchange fee would be made publicly available. Another proposal would require credit and debit card networks to remove constraints placed on merchants for card acceptance, such as requiring merchants to accept all cards from a particular network, and require issuers to disclose networks’ fees to credit card users. The amount merchants pay to accept credit cards is increasing, both because Federal Reserve data indicate that consumers increasingly use credit cards to make payments and because network competition in the credit card market may be contributing to rising interchange fees. As Visa and MasterCard have sought to attract new merchants to accept and issuers to offer their cards, the number of different interchange fee categories has grown. In addition, as the networks compete to attract financial institutions to issue cards on their networks, they may have increased their interchange fees to provide issuers with greater revenue from the fees. However, concerns remain over whether the level of interchange fee rates reflects the ability of some card networks to exercise market power by raising prices without suffering competitive effects, or whether these fees are the result of the costs that issuers incur to maintain their credit card programs. 
Issuers, particularly smaller issuers such as community banks and credit unions, report relying on interchange fees as a significant source of revenue for their credit card operations, and analyses by banking regulators indicate that card activities traditionally have been among the most profitable types of activities for large banks. The amount of fees that merchants pay for card transactions has been increasing in recent years, in part because of the increasing use of credit cards to make payments. The Federal Reserve recently estimated that the use of both checks and cash has declined, or at least grown more slowly than credit and debit card use, since the mid-1990s as more consumers switched to electronic forms of payment. According to data from the American Bankers Association, since 2005 more than half of total retail transactions have been paid for using cards (either debit or credit). Although the total value of fees that merchants paid for card transactions as well as the total value of interchange fees are not publicly available, economists at the Federal Reserve estimated that the value of interchange fees paid on Visa and MasterCard credit and debit cards has increased substantially, from about $20 billion in 2002 to approximately $35 billion to $45 billion in 2007. As the total amount of interchange fees increased, so did merchants’ total fees for accepting cards. Merchants’ card acceptance costs also have been increasing as a result of rising average interchange fee rates. Visa and MasterCard officials told us that their average effective interchange rates applied to transactions have remained fairly constant in recent years when transactions on debit cards, which have lower interchange fee rates, are included. 
However, our own analysis of Visa and MasterCard interchange rate schedules shows that the interchange rates for credit cards have been increasing and their structures have become more complex, as hundreds of different interchange fee rate categories for accepting credit cards now exist (see table 2). According to our analysis, in 1991, Visa and MasterCard each had 4 standard domestic credit card interchange fee rate categories, but by 2009, Visa had 60 and MasterCard had 243 different rate categories that could be charged to card transactions, although not all of these rates would apply to all merchants. According to card network officials, the increase in the number of rates occurred as different types of merchants and cards were added to their interchange rate schedules. For example, the networks introduced new rates for certain industries that previously had not accepted cards (such as energy utility companies or government agencies) or for new methods of shopping (such as online purchases). In addition to the increase in the number of interchange fee rates, the maximum domestic credit card interchange fee per transaction also has increased, as shown in table 2. While some of the networks’ interchange fee rates remained the same during this time and a few decreased, another reason merchant card acceptance costs are increasing may be that individual interchange fee rates also are increasing. According to our analysis, from 1991 to 2009, 43 percent of the individual Visa rates and 45 percent of the MasterCard rates that prevailed in 2009 had been increased since they were originally introduced. Our interchange fee rate analysis showed that the interchange fee rates that increased the most during this period were for some standard card types. 
For example, the rate that applied to MasterCard transactions using basic nonrewards credit cards at merchants that would not otherwise qualify for a special rate—called Merit III base—was 1.30 percent in 1991 and 1.58 percent in April 2009—representing a 22 percent increase. A similar rate for Visa—known as CPS/Retail Credit—increased from 1.25 percent to 1.54 percent, or 23 percent, from 1995 to April 2009. In addition, several of the networks’ higher interchange fee rates also increased during this period. For example, both networks’ corporate card (issued to business customers) interchange fee rates increased considerably—Visa by 36 percent and MasterCard by 82 percent. Rates on other cards that had lower-cost incentive rates for sectors that previously did not take cards also increased. For example, MasterCard’s interchange rate for standard credit cards used at a supermarket increased nearly 29 percent, from an introduction at 1.15 percent in 1998 to 1.48 percent in 2009. Analysis by Federal Reserve staff also showed that interchange fee rates have increased, particularly for premium cards that have higher rates than basic cards. As shown in figure 3, the interchange fee costs for Visa’s and MasterCard’s premium cards have increased about 24 percent since they were introduced in 2005. Interchange fee costs for basic credit cards have stayed roughly the same since 2005, with a 3-percent decline for MasterCard and none for Visa. Although limited information about cost trends for accepting cards exists for American Express and Discover, the rates these two networks charge have not generated the same level of concern as those of the other networks, in part because they are less widely used. Information on the rates that American Express and Discover charge merchants to accept their cards is limited; these networks do not publish the rates they charge merchants, but instead generally negotiate these charges with merchants on an individual basis. 
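The percentage increases cited above follow from straightforward rate arithmetic; a small helper (the function name is hypothetical) applied to the rates given in the text:

```python
def pct_increase(old_rate, new_rate):
    """Percentage increase between two interchange fee rates, rounded
    to the whole-percent figures used in the text."""
    return round((new_rate - old_rate) / old_rate * 100)

# Rates from the report's examples (percent of purchase price):
print(pct_increase(1.30, 1.58))  # MasterCard Merit III base, 1991 -> 2009: 22
print(pct_increase(1.25, 1.54))  # Visa CPS/Retail Credit, 1995 -> 2009: 23
print(pct_increase(1.15, 1.48))  # MasterCard supermarket, 1998 -> 2009: 29
```

Note that because interchange rates are already percentages of the purchase price, these figures are increases in the rate itself, not in the price paid per transaction.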
As discussed previously, for large merchants American Express and Discover generally serve as both the issuer and acquirer of their cards, so merchants’ fees for accepting those cards represent their entire merchant discount fee. Representatives from American Express told us that they do not have interchange fees but instead contract directly with merchants for a fixed merchant discount rate for all types of American Express cards. Discover officials told us that Discover is moving from a single rate for each merchant that applies to all of their cards to a tiered interchange fee model, with higher interchange fees for rewards and corporate cards. Although these networks do not make their merchant discount rate information publicly available, a recent survey of 750 small business owners found that merchants with fewer than 250 employees paid an average of 3.2 percent to accept American Express cards and 2.5 percent for Discover cards, compared with an average merchant discount fee (which includes the interchange fee and acquiring costs) of 2.3 percent that these merchants reported for MasterCard and Visa. According to data provided to us by American Express, the average merchant discount rate for its cards has decreased in recent years, from roughly 3.25 percent in 1990 to 2.56 percent in 2009. The structure of the credit card market is different from that of other markets and could be one reason why merchants’ costs for card acceptance are rising. Economists and other researchers note that credit card networks function differently from most markets because the card market can be considered a “two-sided” market, in which two different groups—merchants and consumers—pay prices for goods or services offered by a producer. Other two-sided markets include newspapers, which charge different prices to consumers who purchase the publications and advertisers that purchase space in the publications. 
Typically, newspapers offer a low subscription rate or per-copy price to attract readers, while funding most of their costs from revenue received from advertisers. Charging low prices encourages larger numbers of consumers to purchase the newspaper, which increases the paper’s attractiveness to advertisers as a place to reach a large number of consumers and thus allows publishers to charge such advertisers more. Similarly, card networks use interchange fees as a way to balance demand from both consumers (who want to use cards to pay for goods) and merchants (who accept cards as payment for goods). As with newspapers, the costs to both sides of the card market are not borne equally. To attract a sufficient number of consumers to use their cards, card networks compete to attract financial institutions to issue them, and institutions in turn compete to find additional cardholders. Just as readers have a variety of sources from which they can receive their news, consumers also have a number of different methods (such as cash, check, or credit card) by which they can pay for a good or service. Because of the choices consumers have available, card networks and issuers want to minimize the costs for consumers to carry their cards to encourage greater acceptance and use. In contrast, merchants have less choice about card costs, particularly once a large number of consumers are using a particular network’s cards. Whereas a consumer may not pay any fee or charge for using a card, card networks charge merchants for accepting cards through interchange and other network fees. Consumers’ payment choices, such as using rewards cards with higher interchange fees, also affect merchants’ costs for card acceptance. 
As a result, some academic researchers have argued that card networks can keep attracting cardholders by offering them increasingly attractive terms while increasing costs to merchants, whose ability to refuse to accept cards is more limited. The concentration of participants in the credit card network market also has raised concerns over competition and pricing. Visa and MasterCard together accounted for about 71 percent of U.S. credit card purchase volume in 2008, American Express for about 24 percent, and Discover for 5 percent, according to an industry newsletter. Some economists and other academic researchers have argued that the large market share of Visa and MasterCard provides these networks with market power—the ability to raise prices without suffering significant negative competitive effects such as lost sales or reduced transaction volume. As more consumers demand to use Visa and MasterCard cards, merchants feel limited in their ability to refuse these cards even as interchange fee rates rise or as consumers increasingly use rewards cards that have higher interchange rates. These researchers cite the low market share for Discover as evidence that a new product has had difficulty breaking into the mature market. With fewer cardholders, the attractiveness of this network’s cards to merchants is reduced. In order for Discover or another low-cost credit card network to enter the market, it has to compete against the dominance of Visa and MasterCard, which already have successfully recruited thousands of financial institutions to issue their cards and millions of merchants to accept them. Card networks face initial fixed costs, including building and maintaining the infrastructure to process transactions and promoting card usage. Many of the economists that study card market issues generally agree that card networks provide a valuable service that benefits issuers, consumers, and merchants. 
However, some have pointed out that once a network’s initial start-up and coordination costs have been recovered, the justification for charging merchants higher prices for card acceptance is reduced. Competition among networks for issuers also may increase merchants’ card acceptance costs, as networks increase interchange fees. Although greater competition generally produces lower prices for goods and services, some economists have noted that competition among card networks instead increases costs for merchants. To maintain or increase their market share, networks compete for financial institutions to issue their cards, and the revenues that the issuers earn from interchange fees are an incentive to choose to issue one network’s card over another. A recent court ruling increased the potential for competition among networks for issuers. Before 2001, Visa and MasterCard had exclusionary rules prohibiting their member institutions from issuing American Express or Discover cards. In 1998, DOJ initiated a lawsuit charging, among other things, that Visa and MasterCard had conspired to restrain trade by enacting and enforcing these exclusionary rules. The trial court held that Visa and MasterCard had violated section 1 of the Sherman Antitrust Act by enforcing their respective versions of the exclusionary rule. As a result of the court’s decision, an issuer of one of these networks’ cards now has the option to issue cards on the Visa, MasterCard, American Express, or Discover network, or a combination of them. Network officials from Visa told us that they actively compete to retain issuers on their network and that interchange fees play a role in that effort. Our analysis of interchange fee rate schedules showed that Visa and MasterCard introduced several of their highest interchange fee rates after this decision, which led to a significant increase in the average interchange fee rates for both networks. 
According to our analysis, 46 percent of the different Visa interchange rates that prevailed in 2009 had been introduced since 2003, and the average of the new interchange rates created by that network since 2003 was 18 percent higher than the average of interchange rates introduced prior to 2003. Similarly, 89 percent of the different MasterCard interchange rates that prevailed in 2009 had been introduced since 2003, and the average of the interchange rates created by that network since 2003 was 11 percent higher. According to analysis provided by the Federal Reserve, Visa and MasterCard introduced higher interchange fee categories in 2005 for premium cards. Visa and MasterCard officials told us that Visa’s “signature preferred” and MasterCard’s “world card” interchange categories were limited only to higher-spending cardholders. Issuers report that the revenue they receive from interchange fees is used to cover a variety of costs in their card programs. Establishing a credit card program by soliciting customers, offering them unsecured credit, and paying for any resulting credit or fraud losses involves many costs. Representatives from issuers and networks reported that interchange fees represent a value to merchants, as issuers’ credit card programs provide merchants with increased sales and eliminate the need for merchants to create and maintain their own credit card operations or internal credit departments. Among the costs that issuers told us they incur in running their credit card programs were costs related to preventing and addressing fraud and data breaches; write-offs for credit losses from delinquent or defaulting cardholders; funding costs associated with paying the merchant before receiving payment from the cardholder; paying for rewards and other cardholder benefits; and general operations, including the issuance of cards and credit card bill statements. 
Although issuers incur costs for offering cards, concerns remain about the extent to which interchange fee levels closely relate to the level of card program expenses or whether they are set high so as to increase issuer profits. In a competitive market, the price of the product and the cost of producing it would be closely aligned. However, producers with market power—such as monopolists or those offering goods not generally offered by others—have the ability to charge high, noncompetitive prices. Representatives of issuers told us that interchange fees did not directly cover specific costs of establishing and maintaining a credit card program, but were one of several revenue sources for issuers, in addition to interest charges on outstanding balances and cardholder fees (such as a late fee or an annual fee). Representatives from a banking industry consultancy group told us that the allocation of issuers’ revenue varied widely, as some issuers provide more benefits through greater rewards and others by offering more credit. For example, issuers derive revenue from cardholders who pay interest charges and other fees on their outstanding balances. However, issuers may receive little to no revenue from cardholders who pay off their balances on time. Representatives from large and small issuers told us that interchange fees provide them with income that covers the costs of providing short-term credit during the grace period and rewards benefits to those cardholders who do not pay interest charges or other fees. Representatives of credit unions and community banks reported that revenue from interchange fees allowed them to cover expenses related to offering credit cards and compete with large issuers to offer their customers credit cards. 
According to data provided by the Independent Community Bankers of America, the interchange fee portion of community banks’ credit card revenue varies widely, with some receiving little income from interchange fees because of inactive cards and others receiving nearly all of their income from interchange fees. Staff from this organization told us that they have contracted with a vendor that provides card processing for many of their members. They report that of the 689 community banks that issue credit cards through this vendor, the average amount of revenue from interchange fees represented about 43 percent of these institutions’ total card revenues. Credit unions and community banks had a higher portion of cardholders who did not carry a balance or incur penalty fees, according to representatives of financial institutions, so they had to rely more on interchange fee revenues than revenues from fee income and interest payments. Representatives of the smaller issuers also reported that they felt they had to offer rewards programs to compete with the larger issuers, but for some, rewards programs did not constitute a majority of their expenses. In addition, two of the credit unions with which we met outsourced the issuance and maintenance of a card program to a third party. Information on the amount of revenues larger financial institutions collect from interchange fees and how those revenues compare with their costs of card operations and rewards programs is limited. We were not able to obtain data from the largest card issuers about their revenues, profits, or expenses to compare interchange fee revenues with expenses. However, industry sources indicate that credit card issuers have derived a significant amount of revenue from interchange fees. 
According to an industry newsletter, in 2007, roughly 20 percent of Visa and MasterCard issuers’ card-related revenue—roughly $24 billion—came from interchange fees, while their total costs (for costs of funds, charge-offs, operations, marketing, and fraud) were about $90 billion, and their profits after taxes were $18.3 billion. According to an economist working for the largest issuers, issuers pass on increased revenue from interchange fees to their cardholders in the form of greater rewards. He reported that from 2005 to 2008 one large credit card issuer provided an increasing portion of its interchange fee income as rewards to its cardholders and that Visa’s traditional rewards, premium rewards, and superpremium interchange fee categories had minimum cardholder rewards programs associated with them. Beginning in March 2008, national and state-chartered banks had to submit data on revenues from credit and debit card interchange fees in their quarterly reports on their condition and income (Call Reports) when such amounts exceeded certain thresholds. However, officials from one banking regulator cautioned that they were still reviewing the consistency of the data provided on these forms. 
Large issuers of credit cards traditionally have been among the most profitable banking institutions. Although credit card issuers have suffered losses in the recent economic downturn, a June 2009 Federal Reserve report points out that for large credit card banks, credit card earnings have been consistently higher than returns for all other commercial bank activities, as shown in figure 4. Recent analysis by FDIC also shows that credit card lending remains a profitable business for credit card issuers, and an FDIC official recently testified that credit card lending has been one of the most profitable business lines for some time. (Statement of Martin J. Gruenberg, Vice Chairman, Federal Deposit Insurance Corporation, “Credit Cardholders’ Bill of Rights: Providing New Protections for Consumers,” before the Subcommittee on Financial Institutions and Consumer Credit of the Financial Services Committee, U.S. House of Representatives, 110th Cong., 2nd sess., 2008.) However, FDIC noted that these institutions also experienced some of the highest rates of charge-offs. (FDIC Quarterly Banking Profile, second quarter 2009.) 
Some consumers have benefited from competition in the credit card market, as those using credit cards enjoy lower fees and interest rates and greater rewards. Benefits to cardholders vary depending on how they use their cards; those with credit card debt accrue finance charges and may pay additional fees. However, consumers who do not use credit cards may be made worse off by paying higher prices for goods and services, as merchants pass on their increasing card acceptance costs to their customers. Although most cards in the United States are issued by the largest issuers, consumers have a wide variety of issuers and cards to choose from. According to the Federal Reserve, over 6,000 depository institutions issue credit cards. These issuers range from some of the very largest financial institutions—such as Bank of America and Citigroup—to credit unions and community banks of varying, often small, size. While there are estimated to be thousands of credit unions and community banks that issue cards, the 10 largest issuers account for about 92 percent of all outstanding credit card debt. Given the large number of issuers and widespread use of credit cards by consumers, issuers compete to obtain new customers and retain existing ones. According to the Survey of Consumer Finances, in 2007, 73 percent of U.S. families had at least one credit card. 
Issuers typically use mail solicitations to market their card products—mailing 3.8 billion solicitations in 2008—but representatives from one large issuer we spoke with told us that they can also advertise online and at their branch locations. Issuers target their marketing efforts depending on cardholders’ payment and use patterns. Cardholders paying their balance in full each month (convenience users) and high-volume card users may be drawn to cards that offer rewards programs, while those cardholders carrying a balance may be more likely to choose a card that offers a low interest rate. As competition for cardholders has intensified, issuers increasingly have turned to rewards programs to attract and retain cardholders. As discussed earlier, these programs are funded in part by interchange fee revenues. According to an industry study, 71 percent of cardholders held a rewards card in 2008. Representatives of all of the large issuers with whom we spoke told us rewards cards represent a significant portion of the cards they offer and are designed with incentives to increase their use by cardholders. One issuer’s staff told us that all of their bank’s traditional credit cards that are in active status have a rewards component and they believe that rewards programs help them to build customer loyalty and to retain existing cardholders. A representative of another large issuer stated that about 51 percent of its cardholders have rewards cards, representing about 81 percent of total volume for the issuer, and a representative of another issuer reported that approximately 50 percent of its cards earn points that can be redeemed for rewards or other benefits. Visa and MasterCard also now allow issuing institutions to upgrade cardholders with basic cards to those with rewards without reissuing the card. Competition among issuers also can lower many cardholders’ credit card costs. For example, issuers compete with one another by offering cards with low interest rates. 
Representatives from one of the large issuers with whom we spoke stated that they typically offer these types of benefits to appeal to cardholders. For example, many issuers offer low temporary rates to transfer existing card balances to a new account. In 2006, we reported that many issuers attempted to obtain new cardholders by offering low, even zero, introductory interest rates for limited periods. According to an issuer representative and industry analyst we interviewed at that time, low introductory interest rates were necessary to attract cardholders in a competitive environment in which most consumers who qualify for a credit card already have at least one. In addition to offering low interest rates, issuers compete by offering cards that have no annual fees and low fees for other actions associated with usage. These lower rates and fees can decrease the cost of using credit cards for some cardholders. However, in recent months, changes in the economy and the passage of the CARD Act have led many issuers to “reprice” their credit card accounts by altering the rates, fees, and other terms that apply to cardholders’ cards. For example, increasing numbers of consumers have been falling behind on their credit card payments. In the first quarter of 2009, the 30-day credit card delinquency rate reached its highest level—6.6 percent—in 18 years. Provisions in the CARD Act—most of which will take effect in February 2010—limit the ability of card issuers to increase the interest rates, fees, or finance charges on outstanding balances except under the conditions set forth in the act. According to an industry publication, in anticipation of the law taking effect, some issuers have increased the interest rates they charge on consumer purchases as well as some of the fees associated with card usage, such as balance transfer fees. 
According to Federal Reserve data, interest rates on consumer credit card accounts have been increasing steadily each quarter since the second quarter of 2008, when rates were 11.88 percent. Since that time, rates have increased to 13.32 percent in the second quarter of 2009. Increased merchant costs for card transactions may lead to higher prices for noncardholding consumers. As discussed earlier, merchants have faced increased costs from accepting credit cards in recent years, in part because of the increasing number of customers using credit cards and in part because of the increase in average interchange fees, particularly from higher-fee rewards cards. Representatives of merchants we interviewed told us that they generally passed any increased costs—including the costs of accepting credit cards—on to their consumers through higher retail prices. Thus, all their customers may be paying higher prices for goods and services, whether using a credit card or not. Economists disagree about whether the increased use of rewards cards further increases costs for merchants. Some researchers state that the increased use of rewards cards, which have higher interchange fees, increases costs for merchants as their customers switch from paying with cash, checks, and basic credit cards to using rewards cards. As a result, all customers, including cash and check users, may pay higher prices for goods and services. In addition, some economists have stated that because rewards cardholders do not pay for rewards cards directly, they use their rewards cards more for transactions than they would if their cards included explicit costs. For example, one study in which consumer survey data were used found that cardholders with rewards cards were more likely to use their cards exclusively than cardholders without rewards cards. However, the extent to which merchants increase retail prices to account for the costs associated with accepting cards is difficult to measure. 
Some researchers argue that consumers—even those paying cash or by check—may still be better off because of widespread card use. While merchants may pay more out of pocket to accept credit cards than they do for other forms of payment, credit cards also provide significant benefits to merchants, such as lower labor and processing costs and increased sales. For example, one of these researchers has theorized that the benefits of increased credit card use may lower merchants’ costs, which in turn would allow them to sell their goods and services more cheaply. Merchants can receive a wide range of benefits from accepting credit cards, and some merchants we interviewed reported receiving increased sales from credit cards. However, representatives of the large merchants with whom we spoke said that their increased payment costs had not led to a corresponding increase in sales, particularly for cards with higher interchange fees such as rewards cards. In addition, these merchants reported that their ability to negotiate lower payment costs was limited by their inability to refuse popular network cards as well as by network rules for card acceptance, which, among other things, preclude merchants from adding surcharges for credit card payments or rejecting higher-cost cards. Finally, although interchange fees are not regulated at the federal level in the United States, concerns regarding card costs have prompted DOJ investigations and private lawsuits, and authorities in more than 30 countries have taken or are considering taking actions to address such fees and other card network practices. Merchants can receive a variety of benefits—primarily, increased sales—from accepting credit card payments. Increased sales can occur for several reasons. First, a customer without sufficient cash can make a purchase immediately using a credit card, resulting in a sale that the merchant otherwise would not have made. 
In addition, some research has shown that, when not paying with cash, some customers may purchase more than they would have otherwise. These researchers say the additional spending occurs because paying with a card can feel less like true spending to some consumers than paying with cash. Representatives of card networks and issuers also report that consumers with rewards cards spend more because they factor in the price of the rewards they receive from their issuing institution, which also results in greater sales than the merchants would otherwise have made. One researcher noted that the amount of additional sales merchants receive from accepting credit cards can be greater for certain businesses. Customers more commonly use credit cards for large purchases and for purchases that they might not be able to pay off right away. Several of the merchants we interviewed have seen some evidence that accepting credit cards has increased their sales. For example, representatives from a national discount store and a small home improvement store told us that customers paying with credit cards spent more than customers paying with cash or debit cards. A dentist told us that his patients spent more on procedures because of the credit that their cards provided. Representatives of the card networks told us that they also are able to increase merchant sales by providing merchants with customer information to enhance their marketing efforts. For example, representatives from one card network told us that they have specific staff tasked with organizing marketing campaigns targeted to particular merchants to increase the sales these merchants make from this network’s cardholders. For instance, if cardholders purchased particular items, their next billing statement would include offers for additional discounts on future purchases at specific merchants that accept their card and also sell such items. 
The networks reported that through their respective databases, they help merchants identify and better understand their prospective, current, and lapsed customers and employ a variety of niche marketing approaches that ultimately serve to increase sales. Accepting credit cards also allows merchants to make sales on credit at a generally lower cost than operating their own credit program. As noted previously, individual merchants originally offered credit cards that could be used only at their stores, but many such merchant programs have been discontinued now that cards issued by third parties—banks, credit unions, and thrifts—are available. Card network and issuer staff told us that credit cards allow merchants to obtain sales from customers that want to finance their purchases over time without the merchants having to incur the costs involved with offering credit. For example, they said merchants avoid the costs of credit losses, debt collection, credit quality assessment, card production, and statement preparation. Credit card acceptance benefits merchants in other ways. For example, merchants can receive faster and more certain payment from customers using cards than from customers using other means, such as checks. Receiving the funds from a check can take as long as 5 days, but merchants can receive the proceeds from card payments from their acquiring institution in 1 or 2 calendar days. For example, the dentist we interviewed told us that his credit card payments are credited to his bank account the day they are processed, providing him almost immediate access to the funds. A small flower shop owner told us that she receives faster payments by credit card than from customers to whom she extends credit by sending a bill. Several of the merchant and banking organizations we interviewed also cited the certainty of credit card payments as a benefit to merchants. 
For example, the home improvement merchant noted that she preferred being paid by credit card to receiving bad checks. Similarly, a sports club owner reported that he prefers the guaranteed payment associated with accepting credit cards to the risks associated with accepting checks. Staff of an association that represents credit unions noted merchants that accept cards have less cash to handle and less risk of employee theft. Staff from a banking association noted that card acceptance reduces the need for merchants to make physical deposits, since card payments are settled directly with their financial institution. Economists have also documented the benefits of guaranteed payments to merchants. Card acceptance also can reduce the time merchants’ customers spend at checkout and can reduce labor costs. For example, representatives of one large merchant told us that their analyses indicated that processing a check payment takes them an average of 70 seconds, processing a cash payment averages 51 seconds, and a credit card payment 32 seconds. Staff from card networks and card issuers told us that the efficiency of card payments has allowed merchants to reduce their staffing, thus saving on labor costs. For example, they noted that credit card customers at gas stations and other retail stores often can pay for purchases without necessarily interacting with an employee. Despite the benefits of payment card acceptance, representatives of several of the large merchants we interviewed reported that their costs of card acceptance have increased disproportionately in comparison with benefits, in large part because of increasing card use. Several of the large merchants we interviewed reported that as a percentage of sales, payment cards are more expensive to process than cash and checks, a fact they explained reflects the technological advances in check processing as well as a competitive market for check-processing services. 
They also reported that even allowing for the operational and administrative costs associated with processing cash (such as armored cars and losses by theft), credit card interchange fees result in credit card payments being more expensive for them overall. For example, staff from one large retail chain told us that for a $100 transaction, a credit card payment generally cost the company about 14 times as much to accept as cash. Other merchants reported that transaction costs for credit cards were two to four times more than their transaction costs for cash. Representatives from large national merchants also provided us with data showing that sales made with cash and checks have decreased in recent years, while sales made with credit cards— particularly those using high-interchange fee cards—have increased. Although credit cards are supposed to generate increased sales for merchants in exchange for their acceptance costs, representatives of large merchants we interviewed told us that their card acceptance costs had increased faster than their sales. For example, a large home improvement retailer told us that although cards may have increased its sales in the past, this has not been occurring recently. According to its own analysis, the total cost the company has paid to accept MasterCard, Visa, American Express, and Discover cards combined increased by 16 percent from 2002 through 2008; however, sales for those same cards increased by only 10 percent during this period. Representatives of this merchant also told us that they had calculated that for every additional 1 percent their company has paid in card acceptance fees and costs, it has received 0.63 percent in additional sales. An official representing a large convenience store chain said that although he believes that a decade ago people may have spent more with credit cards than with cash or check because of the availability of credit, he no longer thinks that is true. 
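The retailer’s growth figures above can be checked with simple arithmetic. The Python sketch below uses hypothetical 2002 dollar baselines; only the 16 percent cost growth, 10 percent sales growth, and the 0.63 ratio come from the merchant’s reported analysis. It also shows the two sets of figures are roughly consistent: 0.63 percent of added sales per 1 percent of added cost, applied to 16 percent cost growth, implies about 10 percent sales growth.

```python
# Arithmetic check of the home improvement retailer's reported figures.
# The 2002 baselines are hypothetical; only the growth rates and the
# 0.63 ratio are taken from the report.

def pct_growth(start, end):
    """Percentage growth from start to end."""
    return (end - start) / start * 100

costs_2002, sales_2002 = 100.0, 10_000.0   # hypothetical baselines
costs_2008 = costs_2002 * 1.16             # reported: acceptance costs up 16%
sales_2008 = sales_2002 * 1.10             # reported: card sales up 10%

cost_growth = pct_growth(costs_2002, costs_2008)
sales_growth = pct_growth(sales_2002, sales_2008)

# Reported ratio: 0.63% of additional sales per 1% of additional cost.
implied_sales_growth = 0.63 * cost_growth

print(f"Costs +{cost_growth:.0f}%, sales +{sales_growth:.0f}%; "
      f"0.63 x {cost_growth:.0f}% implies about {implied_sales_growth:.1f}%.")
```

The close match between the implied figure and the reported 10 percent sales growth suggests the merchant’s two statistics describe the same underlying data.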
Several of the large merchants that we interviewed attributed their rising card acceptance costs to customers’ increased use of rewards cards. Staff from these merchants all expressed concerns that the increasing use of rewards cards was increasing merchants’ costs without providing commensurate benefits. For example, one large merchant provided us with data on its overall sales and its card acceptance costs. Our analysis of these data indicated that from 2005 to June 2009, this merchant’s sales had increased 23 percent, but its card acceptance costs rose 31 percent. Rewards cards were presented as payment for less than 1 percent of its total sales volume in 2005 but accounted for almost 28 percent of its sales volume by June 2009. During this same period, sales processed on nonrewards cards fell by 43 percent. Several of the other large merchants we interviewed provided data that showed that the proportion of rewards cards presented as payment by their customers also had risen significantly in recent years. For example, representatives from one merchant said that about 70 percent of payments on one network’s cards had shifted to rewards cards over the past 5 years, representing an increase in rewards card use of 385 percent since rewards cards were introduced. Further, several of the large merchants also told us that they do not always see correspondingly increased sales from rewards cards compared with other cards. For example, one large merchant provided us with data on its average purchase (ticket) size by payment means. According to our analysis of these data, in July 2005 the average ticket size for rewards card transactions was around $203, but the average ticket size for nonrewards transactions was about $184—the average rewards card purchase exceeded a nonrewards purchase by $19, or about 10 percent, that month. 
However, during the 47 months from July 2005 through May 2009 for which data were available, the average ticket size for Visa rewards cards was lower than for nonrewards Visa transactions in 9 months—or about 19 percent of the time. Although representatives from the card networks and issuers provided us with data indicating that rewards cardholders spent more than nonrewards cardholders, their analysis did not demonstrate that rewards cardholders spent more than they would have with other tender types, producing increased sales for merchants. The largest networks and issuers we interviewed provided data showing that the total amount of spending by their rewards cardholders exceeded that of their nonrewards cardholders. One card network provided us with data showing that, according to its analyses, its three levels of rewards cards had higher average ticket amounts than its basic cards by over $4, almost $12, and over $26 for the highest level of rewards. However, other factors suggest that attributing such increased sales to rewards card use is difficult. For example, rewards cards generally have been offered to higher-income cardholders. Such cardholders might spend more than the average cardholder generally. Thus, higher total spending on rewards cards by individual cardholders or increased ticket sizes for such cards may reflect only that those cardholders spend more in general, and may not represent additional sales that a merchant otherwise would not have received. Similarly, higher total spending on rewards cards compared with spending on nonrewards cards could reflect that rewards cardholders tend to consolidate their spending on fewer cards—sometimes onto a single card—in order to maximize their ability to earn rewards. As a result, such cardholders may not be spending more overall but just limiting their payment methods. Furthermore, merchants may initiate other programs to increase sales. 
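The ticket-size comparison described above reduces to simple arithmetic; the short Python sketch below reproduces the reported July 2005 figures and the 9-of-47-months share.

```python
# Ticket-size comparison, using the merchant's reported figures.
rewards_ticket = 203.0     # average rewards-card ticket, July 2005 (approx.)
nonrewards_ticket = 184.0  # average nonrewards ticket, July 2005 (approx.)

dollar_gap = rewards_ticket - nonrewards_ticket
pct_gap = dollar_gap / nonrewards_ticket * 100   # gap as % of nonrewards

# Months (of the 47 with data) in which rewards tickets were *lower*.
months_lower, months_total = 9, 47
share_lower = months_lower / months_total * 100

print(f"Rewards tickets exceeded nonrewards tickets by ${dollar_gap:.0f} "
      f"(about {pct_gap:.0f}%); lower in about {share_lower:.0f}% of months.")
```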
For example, representatives from a large national retailer told us that when the company started accepting credit cards, sales did increase, but they attributed this increase to the simultaneous introduction of promotions for their new products, and they did not feel that credit card acceptance added any proven incremental sales volume. This spending also may not represent additional sales to merchants. For example, some merchant officials and others told us that cardholders can buy only so much gas; they questioned whether cards actually increased gas station sales overall. Similarly, payments for certain other goods or services, such as taxes or business permits, are not likely to result in increased sales. In addition, some of the large retailers and merchant trade associations told us that one of the reported benefits of credit card acceptance—guaranteed payment—was not always provided to merchants. These representatives also noted that merchants received significant amounts of chargebacks—in which the card issuer subtracts amounts previously credited to the merchant. Such chargebacks can occur either because of fraud or when the customer alleges that the goods were not as described or were never received. However, some of the merchants noted that researching such instances to have the charged amount reinstated is a labor-intensive process. As a result, some told us they had established minimum amounts under which they would not attempt to research and justify a charge. According to data provided by one large issuer, chargebacks as a percentage of sales on one network’s cards ranged from 0.1 percent to 0.2 percent from December 2006 through June 2009. Merchants also reported bearing costs for fraud detection and prevention—another reported benefit of credit card acceptance. 
For example, the increased prevalence of computer hacking incidents in which criminals obtained unauthorized access to cardholder data prompted the card industry to develop the Payment Card Industry Data Security Standard. According to merchants, the card networks have also mandated compliance with these standards, and merchants that fail to meet them are subject to higher interchange fees and fines of $25,000 or more. Although merchant officials acknowledged that such standards are necessary, they noted that they have had to incur significant expenses to comply with them. For example, representatives from one large merchant told us their company spent around $20 million initially to become compliant with these data security standards and has had to spend about $4 million annually to maintain that certification. Officials from another large retailer said their company also has spent millions of dollars becoming compliant with these standards. However, they said that their company has advocated increased data security for years, but noted that instead of just increasing costs for merchants to secure card information, the card networks should be developing payment types that are more secure. For example, several merchants we interviewed noted that other countries are moving away from cards that store sensitive data on magnetic stripes, which can be duplicated by criminals to create counterfeit cards, and instead are implementing cards that require the user to enter a PIN, which merchant representatives told us is more secure. The small merchants we interviewed generally had mixed views about interchange fees and the overall cost of card acceptance. Some merchants, such as the owners of a private golf course and a flower shop, chose not to spend as much time examining their payment costs because the costs either had remained relatively constant or had not risen significantly in recent years. 
Finally, a few merchants noted they considered these fees simply part of the cost of doing business. Increased competition for acquiring services provides merchants with considerable choice and opportunities to negotiate and lower some of their card acceptance costs. As noted previously, the merchant discount fee that merchants pay to acquiring institutions has two components: the interchange fee—which represents the bulk of the total discount fee—and the processing costs. Hundreds of financial institutions and other firms compete as acquirers to provide card-processing services. Staff from merchants, issuers, and card networks told us that the acquiring market is competitive. According to a 2007 report published by the Federal Reserve Bank of Philadelphia, approximately 1.4 million merchants change acquiring institutions each year. This report stated that merchants switch in search of lower prices for acquiring services and better service. According to acquirers and merchants we interviewed, acquirers provide customized acquiring services based on processing volume. Acquirers attract new merchant clients by pricing their services competitively and offering a variety of services to meet merchants’ needs. The competition among acquirers gives merchants the opportunity to choose among competing acquirers and negotiate lower costs. Merchants of varying sizes that we interviewed reported that they have multiple acquiring institutions and processors competing for their business and have been able to successfully decrease the acquiring fee portion of their merchant discount fees in recent years. For example, several large merchants told us they periodically issue requests for proposal soliciting bids for acquiring services and receive several responses. 
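As a rough illustration of the fee decomposition described above (merchant discount fee = interchange fee plus acquirer processing costs), the sketch below uses hypothetical rates; actual interchange rates vary by network, card type, and merchant category, and the report does not cite specific rates here.

```python
# Hypothetical illustration of the merchant discount fee split described
# above: total discount fee = interchange fee (the bulk) + processing costs.

def merchant_discount(amount, interchange_rate, processing_rate):
    """Return (interchange, processing, total) fees for one transaction."""
    interchange = amount * interchange_rate
    processing = amount * processing_rate
    return interchange, processing, interchange + processing

# Hypothetical rates: 1.75% interchange + 0.25% acquirer markup on a $100 sale.
ic, proc, total = merchant_discount(100.00, 0.0175, 0.0025)
interchange_share = ic / total

print(f"Interchange ${ic:.2f} + processing ${proc:.2f} = ${total:.2f}; "
      f"interchange is {interchange_share:.0%} of the discount fee.")
```

Under these illustrative rates, interchange accounts for the large majority of the discount fee, consistent with the report’s description of interchange as the bulk of merchants’ card acceptance cost.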
However, some of the largest merchants told us their choice of firms that can provide them with adequate processing services at a competitive cost is generally limited to only some of the largest providers of acquiring services. Small merchants may choose among numerous firms for processing services, including their own financial institutions. Eight of the nine small merchants we interviewed reported getting solicitations—some frequently—for their acquiring business or had shopped around for acquiring services. Several of the small merchants with which we spoke used third-party processors for electronic payments. Merchants formed these business relationships with acquirers and processors through their own research, through agreements with their financial institutions, and through direct solicitations. Also, small merchants can find competitive providers on the Internet; for example, a warehouse store partners with a third-party processor to provide this service to small merchants. Although merchants have reported success in negotiating their acquiring costs, several of the merchants we interviewed told us that their ability to lower their interchange fee costs was limited. These merchants told us they generally paid the rates listed in the Visa and MasterCard networks’ default interchange fee schedules. Although the ability to refuse to accept Visa and MasterCard should provide merchants with the leverage to negotiate lower interchange fees, merchants reported that they could not refuse to take such cards because of customer demand. For example, several merchants told us that if they did not accept credit cards from Visa or MasterCard, their sales would decrease and they would lose business to competitors that did accept those cards. Merchants told us that without the ability to refuse to take the two largest networks’ cards, their attempts to negotiate generally have not produced meaningful reductions in their interchange fees. 
According to staff from Visa and MasterCard, their networks are willing to negotiate with merchants. For example, officials from one network told us that their network has negotiated with merchants with sales that represent 26 percent of its overall processing volume. Only one of the large merchants we interviewed told us that the company had received a limited and temporary reduction in its interchange fee costs as a result of negotiations with Visa or MasterCard following the settlement of a lawsuit. Two of the merchants we interviewed told us that they could receive reductions in interchange fee rates on one network’s card if they did not accept another network’s card. Other merchants told us that such negotiations were difficult for their businesses, because they had limited control over which type of credit card a customer would choose to use for a purchase. Merchants we interviewed told us that other opportunities that Visa, MasterCard, and their issuers offered to merchants to reduce interchange fees generally have had limited success. For instance, merchants can create a co-branded card. In exchange for promoting the co-branded card, the merchant could receive compensation from the issuer or network or reduced interchange fees. However, merchants we interviewed told us that they have had limited success with co-branding because they had difficulty encouraging their own customers to switch to these cards. For example, representatives from several grocery store chains said that they have had difficulty getting customers with six to eight credit cards in their wallet to add an additional one for sales in their stores. They said that they would have to offer customers rewards to compete for purchase volume with the other cards. In addition, an owner of a convenience store chain started offering a co-branded card in 2002. 
He said that his stores issued a total of 2,500 cards in 7 years, although the issuing financial institution had anticipated that the convenience store would issue 10,000 cards annually; he told us that the rebates these cards offered—a 2-percent rebate at his stores and 1 percent on purchases elsewhere—were not competitive enough to attract his customers. Similarly, officials from a large national retailer told us that less than 1 percent of their sales were on their co-branded card. Smaller merchants we interviewed generally did not have relationships with issuers and networks. Representatives of issuers told us that the fact that merchants chose to enter into co-branded relationships was evidence that merchants receive greater sales and value from these programs. In contrast to Visa and MasterCard, American Express and Discover generally act as their own acquirers and negotiate directly with the merchants accepting their cards. For example, representatives from American Express told us they negotiate a merchant discount rate directly with merchants for 1- to 5-year terms. While technically each merchant has a separate contract and rate, American Express officials noted that for many types of merchants, a standardized rate applies depending on transaction volume, with higher-volume merchants likely to pay less. The Discover card network conducts direct negotiations with large merchants and sets the merchant discount rates based upon these negotiations rather than publishing a schedule. Both networks also use third-party acquirers that negotiate with smaller merchants on their behalf. As discussed previously, these networks have a lower market share than Visa and MasterCard, so merchants have greater ability to refuse to take such cards and a greater ability to negotiate costs and terms. 
Representatives of two of the grocery store owners we interviewed said that they had greater success in negotiating with American Express and Discover because these networks had lower market share and were trying to gain wider acceptance. Another factor that limits the leverage that merchants have to negotiate lower interchange fees is the card network rules. Each of the major card networks—Visa, MasterCard, American Express, and Discover—has various card acceptance rules that limit the options that merchants have for accepting or denying cards. These rules include the following:

- No surcharges: Merchants may not impose a surcharge on consumers for the use of credit cards or cards with higher interchange fees.
- Honor all cards: Merchants are required to accept all credit cards within a network’s brand.
- No discrimination/differentiation: Merchants may not differentiate between cards within a network or discourage the use of cards within a network.
- No minimum or maximum charges: Merchants may not impose a price floor or price ceiling on credit card transactions.
- Preferred treatment: Merchants may not direct consumers away from or to a certain network’s cards.

Merchants, some academic researchers, and merchant representatives argue that these rules constrain merchants’ ability to limit the costs of credit card acceptance. For example, without the ability to surcharge for credit cards generally, for a particular network’s cards, or for higher interchange fee cards, merchants are unable to steer customers toward lower-cost forms of payment or recoup some of their costs for higher-cost cards. In addition, without the ability to influence customers’ payment choices, merchants are unable to use their influence with the networks to encourage them to lower interchange and other fees in general, or offer more lower-fee cards. Merchants also told us that the rule requiring them to accept all cards from a network means that they are forced to accept higher-cost cards. 
Some of the merchants with which we spoke told us that the inability to set minimum credit card purchase amounts meant that they sometimes incurred card-processing costs that made some sales uneconomical. For example, one merchant told us that when a customer pays for a small item, such as a newspaper or a pack of gum, with a rewards card, the costs to process the transaction could exceed profit on the item. The rules provide cardholders with the ability to use any card at any accepting merchant. However, cardholders may not realize that different cards can affect merchant costs to different degrees because merchants cannot take actions that either limit cardholders’ ability to use certain cards or otherwise differentiate among cards. Representatives of issuers and card networks told us that the network rules are designed to promote the wide acceptance of their cards and ensure that their cardholders have a positive experience with the card. For example, they told us that the “honor all cards” rule ensures that merchants will accept all cards from a particular network brand, which ensures that even the cards from smaller, lesser-known issuers such as credit unions and small banks are accepted. Issuers and card network representatives also told us that surcharges are illegal in some states and would diminish cardholders’ expectations for the use of their card. They said that such prohibitions were intended to eliminate bait-and-switch tactics, in which merchants would advertise a low price, only to increase it at the point of sale if the customer used a credit card. Interchange fees are not regulated at the federal level in the United States. The Federal Reserve, under TILA, is responsible for implementing requirements relating to the disclosure of terms and conditions of consumer credit, including those applicable to credit card fees. 
The various depository institution regulators—Federal Reserve, OCC, FDIC, Office of Thrift Supervision, and National Credit Union Administration—conduct examinations that can address how banks, thrifts, and credit unions that issue credit cards are complying with the TILA card disclosure requirements. However, Federal Reserve staff told us that because interchange fees are paid by merchants’ financial institutions and not directly assessed to consumers, such fees are not required to be disclosed to consumers. Staff from some of the banking regulators told us that they do not review the level at which interchange fees are set during their examinations, but instead review interchange fees as a revenue source for the institutions or the effect they may have on the financial stability of the issuer. Additionally, through the Federal Financial Institutions Examination Council, the regulators also conduct annual examinations of the card networks to ensure that these entities are adequately managing the operational risks involved with card processing, reducing the potential for such operations to create financial problems for the institutions they serve. Such examinations are done as part of a program to review the activities of vendors or processors that provide services to depository institutions. Regulator staff told us that their oversight does not involve assessing how the networks set interchange fees.

Although no U.S. entity specifically regulates credit card interchange fees, card networks’ practices can be subject to regulatory action by authorities responsible for enforcing competition laws. In the United States, DOJ and FTC have jurisdiction over credit card networks and issuers as part of enforcing U.S. antitrust laws or the Federal Trade Commission Act.
As a result, DOJ and FTC can investigate whether the imposition of interchange fees or another network or issuer practice constitutes an anticompetitive or unfair business practice prohibited by these laws. The card networks’ practices, including interchange fees, have been the subject of past and current investigations under these laws. As discussed previously, in 1998, DOJ sued Visa and MasterCard for alleged antitrust violations regarding, among other things, how these networks’ rules in effect prevented issuers from issuing cards on their competitors’ networks. The court found that Visa’s and MasterCard’s “exclusionary rules,” which prohibited member institutions from issuing Discover and American Express cards, were a substantial restraint on competition in violation of the Sherman Act. Although the networks’ imposition of interchange fees was specifically not the subject of the DOJ action, the trial court concluded that Visa and MasterCard had market power in the market for network services, citing large market shares in a highly concentrated market.

DOJ officials reported that they currently have another investigation under way in which they have been reviewing whether some of the networks’ rules are anticompetitive. As discussed earlier, these rules include those that prevent merchants from steering customers to other forms of payment, levying surcharges for card transactions, or discriminating against cards by type. DOJ staff told us they have requested information from American Express, Discover, MasterCard, and Visa as part of this investigation. They were not able to provide an estimate for when any formal action resulting from the investigation might occur.

Interchange fees and other card network practices also have been the subject of private lawsuits. Since the mid-1980s, various lawsuits alleging problems with interchange fees and other card network practices have been litigated, as described in table 3.
As of September 2009, a class action was pending in the United States District Court for the Eastern District of New York, in which merchants claim that interchange fees have an anticompetitive effect that violates the federal antitrust laws. This case is a consolidation of at least 14 separate lawsuits against Visa and MasterCard and their member institutions that had been in four separate districts.

While interchange fees are not regulated in the United States, as of September 2009, more than 30 countries have acted or are considering acting to address competition or card cost concerns involving payment cards. Some actions taken by these countries include the following:

- regulating relationships between merchants and issuers and card networks, such as prohibiting card networks from imposing certain rules on merchants;
- establishing maximum interchange fees or capping average interchange fees;
- allowing more institutions to enter the credit card market by changing the requirements to qualify to act as issuers or acquirers; and
- conducting investigations into the functioning of the payment card market, including legal antitrust proceedings.

For example, authorities in Australia have taken various actions regarding credit card interchange fees and other card network practices since 2003 (see sidebar). In recent years, the European Commission has undertaken proceedings against card networks and set caps on cross-border interchange fees—those applying to card transactions in which a card issued in one country is used to make a purchase in another country—that affect transactions occurring in 31 European countries. In 2007, the New Zealand Commerce Commission initiated proceedings against Visa and MasterCard and issuers of their cards that alleged price-fixing in the setting of fees.
This resulted in an August 2009 settlement that included various actions affecting Visa cards, including directing acquirers and issuers to bilaterally negotiate interchange fees instead of having the network set them, removing that network’s “no surcharge” rule and “no steering” rules, and allowing nonbank organizations to participate in the market as issuers or acquirers. Other countries are considering taking actions. For example, in June 2009, Canada’s Standing Committee on Banking, Trade and Commerce reported the results of an investigation into this market, and recommended a number of actions, including permitting merchants to surcharge and steer customers toward low-cost payment methods, requiring merchants to disclose interchange fees to consumers, and prohibiting card networks’ “honor all cards” rules.

Concerns about the rising costs of card acceptance for merchants have led to regulatory measures in some foreign jurisdictions and legislative initiatives in the current U.S. Congress. These options generally have involved one or more of the following approaches: (1) setting or limiting interchange fees; (2) requiring their disclosure to consumers; (3) prohibiting card networks from imposing rules on merchants, such as those that limit merchants’ ability to discriminate among different types of cards or levy a surcharge for credit cards; and (4) granting antitrust waivers to allow merchants and issuers to voluntarily negotiate rates. Industry participants and others cited a variety of advantages and disadvantages associated with each option and suggested that the categories of stakeholders—such as merchants or issuers or large merchants versus small merchants—would be affected differently. They also noted that, in some cases, the ultimate impact of the option was not clear. A more detailed discussion of industry participants’ and others’ views on the merits of each of the options can be found in appendix II.
Each of these options is designed to lower merchants’ costs for card acceptance. For example, setting or capping interchange fees would limit the amount of interchange fees charged to merchants. Both RBA and the European Commission have used this approach, and regulators in other countries have worked with Visa and MasterCard to voluntarily reduce their interchange rates. Requiring the disclosure of interchange fees could lower merchants’ fees if consumers changed their behavior after seeing the different costs associated with different forms of payment, and shifted from higher-cost forms of payment such as rewards and other credit cards toward less expensive forms of payment, such as debit cards. The option to eliminate certain network rules, such as the no-discrimination or no-surcharge rule, could allow merchants to either refuse to accept higher-cost cards or receive additional revenue from the consumer to help cover the costs of the interchange fees. For example, a study of surcharging in the Netherlands reported that when merchants placed a surcharge on payment cards, customers there used that form of payment less often. The ability to take these actions could provide merchants with bargaining power to more effectively negotiate with the card networks and issuers over interchange fee rates, even if merchants did not exercise this ability. Refusing to take certain cards issued by a network, such as those with higher interchange fees, could prompt networks and issuers to reduce the prevalence of such cards. Direct negotiation between merchants and issuers under an antitrust waiver also could grant merchants increased bargaining leverage in negotiating interchange fee rates and terms, with a goal of lower costs to merchants.

If interchange fees for merchants were lowered, consumers could benefit from lower prices for goods and services, but proving such an effect is difficult, and consumers may face higher costs for using their cards.
With lower card acceptance costs, merchants may pass on their interchange fee savings through lower prices to consumers; however, the extent to which they would do so is unclear. As discussed previously, consumers—even those paying with cash and by check—may be paying higher prices because of merchants’ increased costs of interchange fees. RBA estimates that its cap on interchange fees lowered fees to merchants by about 1.1 billion Australian dollars for the period of March 2007 through February 2008, but officials acknowledged that it would be very difficult to provide conclusive evidence of the extent to which these savings have resulted in lower retail prices because so many factors affect such prices at any one time. Moreover, the degree of savings depends on whether merchants are increasing their prices because of higher interchange fee costs. Some merchant representatives we interviewed told us that merchants would take steps to improve customer service if interchange fees were lowered, such as hiring more employees. Customers also may not experience lower prices if merchants’ overall costs do not decrease. Several industry participants speculated that if merchants were allowed to refuse higher-cost cards, merchants would lose sales from customers using premium credit cards, who, network and issuer officials told us, spend more than customers using basic credit cards. A study of the Australian reforms by several economists reported that because the actual decrease in merchant costs was very small, merchants may have hesitated to lower prices, especially when their other costs might have been changing.

Lowering interchange fee revenues for issuers could prompt issuers to increase cardholder costs or curtail cardholder credit availability. In Australia, issuers reduced rewards and raised annual fees following that country’s interchange fee cap.
In addition, with less interchange fee income, representatives of smaller issuers such as community banks and credit unions told us that they likely would not offer rewards cards and therefore would be unable to compete with the larger issuers in the market. One credit union official told us that the credit union could not offer credit cards because of the expense involved with running such a program. In addition, representatives of credit unions and community banks we interviewed said that they benefited from a network system that developed interchange rates to attract both merchants and issuers. Allowing merchants to refuse certain cards or negotiate rates directly with the issuers would eliminate smaller institutions from the process. Representatives of larger issuers told us that with less revenue from interchange fees, they would consider reducing the amount of credit they make available to their cardholders. Australian officials reported that since their reforms were instituted, the number of credit card accounts in Australia has continued to increase and smaller credit unions have remained in the credit card business, albeit with some of their operations outsourced.

Each of these options for lowering card fee costs presents challenges for implementation that would need to be overcome. For example, if interchange fees were capped or limited, an oversight agency or organization would have to be designated to determine how and to what level such fees should be set. In addition, economists and other researchers noted that determining an optimal level that effectively balances the costs and benefits among the networks, issuers, merchants, and consumers would be very difficult to do. When Australian officials set their interchange fee cap, they did so based on their assessment of the benefits and costs of different payment methods, but they also told us that many years of data would be needed to determine the effectiveness of the rate cap.
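The weighted-average style of cap that Australian officials adopted can be sketched as a simple compliance check. The card mix, fee rates, and benchmark below are hypothetical, chosen only to show the arithmetic a regulator or network would perform, not actual Australian figures.

```python
# Sketch of a weighted-average interchange cap of the kind RBA applied in
# Australia: the value-weighted average fee across a network's card types
# must fall at or below a benchmark. All figures below are hypothetical.

def weighted_average_rate(values, rates):
    """Interchange rate averaged across card types, weighted by transaction value."""
    return sum(v * r for v, r in zip(values, rates)) / sum(values)

# Assumed annual transaction value (in $ millions) and fee rate per card type.
values = [500, 300, 200]            # standard, rewards, premium cards
rates = [0.0040, 0.0060, 0.0090]    # 0.40%, 0.60%, 0.90% of transaction value

benchmark = 0.0050                  # assumed 0.50% weighted-average cap
avg = weighted_average_rate(values, rates)
print(f"weighted average: {avg:.2%}, within cap: {avg <= benchmark}")
# Here the average (0.56%) exceeds the assumed cap, so the network would
# have to lower some rates -- e.g., cutting the premium rate to 0.60%
# brings the weighted average down to exactly 0.50%.
```

A cap of this form leaves the network free to keep some card types above the benchmark as long as higher-volume, lower-fee cards pull the average back under it, which is one reason determining the "right" level is difficult.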
If interchange fees were disclosed to consumers, issuers and merchants said that consumers might find the additional information confusing, and some merchants said that their cashiers might not be able to clearly communicate the correct interchange fee for the specific transaction. For the option that would allow merchants to discriminate among cards and add a surcharge for more expensive credit card transactions, merchants said that it would be difficult for them to determine which cards carry higher interchange rates. Finally, the proposal to allow merchants to directly negotiate with issuers raised several concerns among the industry participants we interviewed. They said that such negotiations could harm small merchants and small issuers, which do not have as much leverage as larger participants and, in some cases, lack the resources to participate in bargaining sessions. In addition, implementing this option would require an exemption from federal antitrust laws, which include provisions designed to protect consumers from the consequences of agreements in restraint of trade. DOJ officials have expressed their historical opposition to efforts to create exemptions to antitrust laws, stating that these exemptions should be used only in the rare instances in which a public policy objective compellingly outweighed free market values.

Although each option had advantages and disadvantages and difficulties in implementation, removing the networks’ antisteering rules and restricting interchange fees with a cap or other limit were the two options that appeared to receive the most support from the large and small merchants and merchant trade associations with which we spoke. Removing the antisteering rules appears to have various advantages, including providing merchants with the ability to send signals to cardholders about which cards increase merchant acceptance costs, a change that could improve merchants’ leverage in negotiating their payment costs.
Merchants’ ability to surcharge or refuse certain cards also could cause cardholders using rewards cards to be more aware of and to bear more of the cost of the rewards from which they currently benefit. This option also may require the least intervention, as merchants could decide whether to add surcharges or refuse certain cards based on their own customer mix. In addition, the potentially anticompetitive effects of these rules are also the subject of the current DOJ investigation and some of the private lawsuits. A significant advantage of capping or limiting interchange fees is that it would reduce interchange fee costs most directly. The experience in Australia indicates that this option does lower merchant costs, and Australian regulators and merchant representatives maintain that consumers have also benefited, arguing that merchants in competitive markets generally lower prices. The main challenges to implementing this option are determining the right level of reduction, such as capping interchange rates at a level below that of rewards cards, and tailoring the change to avoid unintended effects on other networks or on smaller issuers.

We provided a draft of this report to the Department of Justice, the Board of Governors of the Federal Reserve, the Federal Trade Commission, the Federal Deposit Insurance Corporation, the National Credit Union Administration, the Office of the Comptroller of the Currency, and the Office of Thrift Supervision for their review and comments. Through informal discussions, staff from DOJ, the Federal Reserve, Federal Deposit Insurance Corporation, and the National Credit Union Administration noted the quality of the report. Each of these agencies, as well as the Office of the Comptroller of the Currency and the Office of Thrift Supervision, also provided technical comments that were incorporated where appropriate. Federal Trade Commission staff noted they had no comments on the report.
We are sending copies of this report to interested congressional committees and members, the Department of Justice, Federal Trade Commission, Board of Governors of the Federal Reserve, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, the Federal Deposit Insurance Corporation, the National Credit Union Administration, and other interested parties. In addition, this report will be available on our Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at 202-512-8678 or cackleya@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our objectives were to describe (1) how the fees merchants pay for accepting credit cards have changed over time and the factors affecting the competitiveness of the credit card market, (2) how credit card competition has affected consumers, (3) the benefits and costs to merchants of accepting cards and their ability to negotiate those costs, and (4) the potential impact of various options intended to lower merchant card fee costs. To assess how the fees merchants pay for accepting credit cards have changed over time, we reviewed relevant literature and analyzed available data on interchange fee rates provided by Federal Reserve staff, a large merchant, and a large credit card processor. To describe the factors affecting the competitiveness of the credit card market, including how credit card competition has affected consumers and merchants, we summarized economic and other academic literature. Our literature review built upon key studies to which experts we interviewed referred and which we found by reviewing research databases such as Econlit and through general Internet searches.
In our literature review, we sought to summarize a diverse body of literature that described various views on the economics and policy implications of interchange fees. We also interviewed representatives and analyzed data from the four major credit card networks (American Express, Discover, MasterCard, and Visa) and several banking and acquiring trade associations, the members of which include large and small institutions that participate in the credit card system: the American Bankers Association, the Electronic Transactions Association, the Independent Community Bankers of America, the Credit Union National Association, and the National Association of Federal Credit Unions. In addition, we conducted interviews and analyzed data from three of the largest credit card issuers as measured by total outstanding credit card loans, as of December 31, 2007, in the Card Industry Directory and three of the largest credit card acquirers in the United States. We also met with representatives of several small credit unions and community banks that issue credit cards. However, public information on interchange fee revenue, card issuers’ costs, and other quantitative data from card networks and issuers is limited, and ongoing litigation limited these entities’ ability to share such information with us.

To learn more about merchants’ costs for accepting credit cards, we consulted relevant literature and interviewed and reviewed payment cost data provided by several merchant trade associations, large national retail merchants, and small merchants from two local Chambers of Commerce. We met with officials from the National Retail Federation, the National Grocers Association, the Food Marketing Institute, the National Association of Convenience Stores, the Retail Industry Leaders Association, and the Small Business and Entrepreneurship Council.
We met with representatives of 10 of the largest retail merchants in the United States; 8 of these accounted for 42 percent of the wholesale and retail trade industries listed in the S&P 500 in 2008. We also met with one privately owned large company and one publicly traded large company that were not listed on the S&P 500. We selected the wholesale and retail categories because merchants in these industries accept and receive payments from consumers. We also selected small merchants to interview from the Washington, D.C., and Springfield, Virginia, Chambers of Commerce. These merchants represented a diverse group of businesses, including boutique shops, sports clubs, and a health care professional. In addition, we interviewed representatives of other organizations from across the country that accept payments from consumers, including hospital owners, utility companies, and a city government official. Although we selected merchant representatives to provide broad representation of merchant experiences with accepting credit cards, their responses may not necessarily be representative of the universe of small and large merchants. As a result, we could not generalize the results of our analysis to the entire market for merchant experiences with accepting credit cards and paying interchange fees. Where possible, we obtained information from card networks about benefits to merchants.

To describe the regulation of interchange fees both in the United States and in other countries, we met with officials from the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, the Federal Deposit Insurance Corporation, and the National Credit Union Administration.
We also reviewed studies of the impact of interchange fee reforms in other countries, and interviewed officials from the Reserve Bank of Australia on their actions related to interchange fees because Australia was one of the first countries to act, and sufficient time has passed to allow information on the impact of their actions to be available. To discuss federal antitrust activities, we interviewed officials from the Department of Justice and the Federal Trade Commission and updated our summary of major federal antitrust actions surrounding interchange rates from our 2008 report. Our interviews with industry participants, academics, and regulators also provided us with an understanding of the potential impact of various proposals to lower interchange fees.

Quantitative information used in this report was attributed to its source and supported by other corroborative evidence, such as industry sources, federal regulatory data, or interviews. In some instances we were able to do some testing of the data. We believe that these data are sufficiently reliable for the purposes of showing general trends in the credit card industry and merchant experiences with accepting credit card payments.

We conducted this audit in Washington, D.C., from May 2009 to November 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Options to address increasing costs of interchange fees include setting or limiting interchange fees, with the intent of lowering costs for merchants and, potentially, consumers.
For example, a regulatory agency could cap interchange fees at a single rate or an average rate or work with industry participants—such as networks, issuers, and merchants—to decrease the fees. Some countries have limited interchange fees using such methods. The Reserve Bank of Australia (RBA) has set a limit under which the weighted average of MasterCard and Visa interchange fees must fall. The European Commission recently reached an agreement with MasterCard that limits the average interchange fees that it can charge for cross-border transactions (purchases made in one country using a credit card issued in another) in the European Union. In addition, governments in other countries, such as Mexico, have worked with industry participants to voluntarily decrease interchange fee rates.

Merchants could realize lower costs if interchange fees were limited. As discussed previously, many merchants are concerned that their interchange fee-related expenses have increased significantly in recent years. Some merchants in very competitive industries, such as retail and gas stations, told us that because they were unable to increase prices to cover increases in their interchange fees, their slim profit margins had further decreased in recent years. A representative of one franchise told us that even if interchange fees were capped at their current rates, this cap would help ensure long-term profitability. Furthermore, consumers might benefit if merchants passed on their savings through lower prices. However, many industry participants acknowledged that it would be difficult to prove a direct link between lower interchange fees and lower consumer prices. Merchants and representatives of merchants in the retail industry have argued that because retail is one of the most competitive industries, retailers would have to lower prices if their interchange fee-related costs decreased. However, other industries may face pricing constraints.
For example, representatives of two utility companies told us that they could not change their prices without first getting approval from their respective regulatory bodies. Several economists have argued that the ability of merchants to pass on their savings from lower interchange fees would depend heavily on the respective merchants’ size and market share, as well as other factors. While the amount of the price reduction might depend on these factors, some studies have demonstrated that a reduction in retailer costs in the competitive gasoline industry can lead to savings for consumers. The Reserve Bank of Australia has estimated that savings to Australian consumers from decreased interchange fees and other interchange reforms likely exceeded 1.1 billion Australian dollars for the period of March 2007 to February 2008, but officials acknowledged that it would be very difficult to provide conclusive evidence of these savings. Others have argued that Australian merchants have not passed savings on to consumers, some of them citing economic literature that argues that such a reduction in merchant costs would not affect retail prices very quickly, even in the context of extensive competition.

In contrast, limiting interchange fees could decrease the interchange revenues of issuers, but the impact on issuers would depend largely on the way in which the option is implemented. For instance, if an interchange fee cap were set at a level significantly below the current rates and applied to all interchange fees charged, then all issuers could be affected. But a cap also could be set at a relatively high rate, such as at the maximum rate for standard credit cards, or certain issuers could be excluded from the regulation. Card issuers, especially small issuers, oppose any option that would decrease their interchange fee revenues significantly.
According to some industry participants, smaller issuers such as credit unions and community banks rely more heavily on such revenue than large banks, which receive more income from interest and other cardholder fees. Representatives of a credit union association told us that revenue from interchange fees made up over 20 percent of most credit unions’ total card revenues. Several representatives of community banks and credit unions told us that they likely would not be able to offer rewards cards without the revenues they received from interchange fees, and not offering such cards would decrease their overall ability to compete with larger banks.

In addition, many industry participants and others agreed that the costs of card acceptance might shift from merchants to cardholders if interchange fees were limited, card surcharges permitted, and interchange revenues decreased. However, they did not agree on whether the shift would be positive or negative. Some researchers have argued that such a shift would lead to more efficient outcomes, because cardholders would pay for the benefits—such as rewards—that they enjoyed. Or, as some economists have noted, cardholders faced with higher costs for using their credit cards might change their behavior by using rewards cards less frequently and opting for alternative payment methods such as debit cards, which could result in lower costs for merchants. Other economists have argued that any consumer savings from lower prices would not be sufficient to offset the negative impacts on cardholders. Representatives of issuers and card networks we interviewed told us that this option would affect cardholders negatively, because issuers likely would respond to their reduced interchange fee income by increasing cardholders’ annual fees and other user fees, decreasing the value of rewards points, and possibly increasing interest rates and decreasing available credit.
In Australia, cardholders’ benefits as a portion of spending dropped an average of 23 percent from 2003 to 2007 following the interchange fee reforms. Moreover, a limit on interchange fees could affect merchants negatively if this option led to decreased overall retail sales or available credit. Some industry participants and others pointed to studies that illustrate that consumers tend to spend more when paying with credit cards than when using other payment methods, and that rewards cardholders spend even more than nonrewards cardholders. If consumers shifted from using rewards cards in response to decreased rewards and increased annual and user fees, merchants might realize lower sales revenues overall. Issuers, if faced with lower interchange fee revenues, could decide that some credit card programs were too expensive to maintain and might cut credit to cardholders, including merchants that depend on credit to finance business expenses. For example, the results of the National Small Business Poll indicate that about 74 percent of small businesses use business credit cards and about 39 percent use personal credit cards for business purposes.

If the fee limit option were chosen, a challenge for implementation would be setting and maintaining interchange fees at a level that effectively balanced the costs among networks, issuers, merchants, and consumers, which economists and others agree would be very difficult to do. Australian officials set their interchange fee limit using a cost-based approach, which they characterized as practical and meeting legislative requirements. They chose a cap based on the costs that issuers would incur for authorization and processing, fraud and fraud prevention, and funding the interest-free period; the costs of credit losses were not included. Using such an approach would require specialized knowledge of the benefits and costs of different payment methods, some of which may be difficult to measure accurately.
In addition, industry participants and others do not agree on which costs should be covered by interchange fees; some issuers and card networks have argued that the fees should cover credit losses, but others have argued that issuers should cover credit losses with their interest revenues. Considerable cost also could be involved for an agency to collect and analyze extensive data from industry participants on interchange-related costs and benefits. A second option to address concerns about interchange fees would require the disclosure of interchange fees to consumers and is intended to increase their awareness of the fees and change their payment behavior. The fees could be disclosed to consumers on sales receipts, on consumers’ card statements, or through generic notices that merchants would post advising their customers about interchange fee costs. Although Visa and MasterCard officials told us that their rules currently do not prohibit merchants from posting their interchange fee costs, merchants and representatives from a large acquiring bank told us that this option would be difficult to implement because merchants are unable to ascertain interchange fee costs until they submit payments for processing. As discussed previously, interchange fees vary depending on the type of card used and the method of processing used. Disclosing interchange fees to consumers could result in lower costs to merchants, which then could pass the savings to consumers, but only if consumers responded to such disclosures by decreasing their use of relatively expensive cards. Proponents contend that consumers deserve to understand the interchange expenses facing merchants because consumers could be paying for at least a portion of these fees. If consumers shifted from using relatively costly credit cards to less expensive forms of payment, this would decrease interchange costs for merchants, which in turn could lower prices for consumers.
However, many of the industry participants and others with whom we spoke predicted that most consumers would disregard information about interchange fees. Such disclosures could be confusing for consumers. Merchants, issuers, and card networks expressed concern that their customers might not understand the information and might misinterpret the fees listed on the receipt or bank statement as an additional charge, rather than as a component of the total price. Merchants told us that it is very difficult for cashiers to distinguish between the numerous types of debit and credit cards, which have varying interchange rates. Thus, it could be very complicated for a cashier to clearly communicate to the consumer the correct interchange fee for the specific transaction. Additionally, whichever party is responsible for disclosing information about interchange fees to consumers would incur the costs of updating its technology to allow for such disclosures. For disclosure in merchant receipts, merchants would incur the cost of changing their receipts. Issuers have reported that changes to card statements, such as the inclusion of additional disclosures, would generate costs for them. A third option would prohibit card networks (and other entities) from imposing rules on merchants that limit their ability to discriminate among different types of cards, or levy a surcharge for accepting credit cards. The broad intent of this option is to decrease the costs to merchants of accepting cards by allowing them to steer their customers toward less expensive forms of payment. As discussed earlier, card networks generally impose rules on merchants that accept their cards, such as not allowing merchants to add a surcharge or discriminate among cards issued by the same network and prohibiting minimum purchase amounts. For merchants, the primary benefit of removing one or more of the restrictions for card acceptance could be lower costs. 
If the rule that merchants must honor all cards from a network were relaxed, merchants could refuse to accept cards with high interchange fees. For example, many of the merchants we interviewed with small average purchase amounts, such as convenience store owners, told us that they would receive significantly more benefits from accepting cards if they could steer consumers away from using cards with high interchange fees, especially in cases in which the purchase amount was lower than the total merchant discount fee. Several merchants told us that they would apply a minimum purchase amount for credit cards if they were allowed to, while one small merchant told us that its store already did so. If merchants could levy a surcharge, they also could receive additional revenue to cover their interchange fee-related costs. Although card networks allow merchants to discount for cash purchases, as required by law, some merchants and others have argued that the network rules and state requirements surrounding cash discounting make the practice too complicated. Merchants explained that currently they have to post a cash price and a card price for each item to offer cash discounts, which could confuse or irritate their customers, so an option that allowed merchants to add a surcharge and refuse certain cards would be a more feasible way for merchants to decrease their interchange fee-related costs. This option also could improve merchants’ bargaining power with card networks and issuers. According to some industry participants and researchers, the only leverage that merchants currently have to control their interchange-related expenses is to refuse to accept all of the cards of a given network. 
As discussed previously, given the large market share of Visa and MasterCard, customers expect to be able to use those cards at a wide variety of merchants, and several of the merchants we interviewed told us they would lose sales if they refused to accept all cards from either of these networks. Some merchants told us that they would have a greater ability to manage their costs with the option to surcharge for credit cards or not accept cards with higher interchange fees. This option also may reduce merchant costs as consumers shift away from higher-cost forms of payment (such as rewards cards) to less expensive forms of payment (such as debit cards). Some industry participants and academic researchers have argued that such a shift would lead to more efficient outcomes such as lower prices for consumers (merchants might lower prices if interchange costs decreased). As mentioned previously, all consumers, even those paying with cash or check, bear the cost of merchants’ costs for card acceptance. Proponents of this option reason that without signals about the costs of different payment mechanisms, such as limited card acceptance, surcharges, or cash discounts, consumers likely have overused cards with relatively high interchange fees. There is some empirical evidence that illustrates that surcharges can change consumers’ choice of payment method. For example, a recent study conducted in the Netherlands, which allows surcharges, showed that consumers there altered their behavior in response to surcharges for debit cards. While 25 percent of consumers surveyed said that they would use their debit cards regardless of an applied surcharge, the majority stated that surcharges affected their choice of payment, with most using cash for sales tickets of less than 15 euros. A survey conducted in Norway also illustrated that consumers there were quite sensitive to consumer prices that reflected payment costs. 
However, removing merchant rules for card acceptance also could affect cardholders, issuers, acquirers, and card networks. More specifically, cardholders might not be able to use their cards in as many locations and could face higher direct costs for card usage. Credit networks, issuers, and acquirers also argued that consumer protection issues would arise because card users would be treated differently than consumers using other payment methods such as cash. Some industry participants and others were concerned about the ability of merchants operating in less competitive markets to set surcharges at a higher level than would be needed to cover their merchant discount fees, thus resulting in a new stream of revenue for merchants. Also, some noted that consumers would have less choice if small issuers were forced out of the market by reduced interchange revenues. Additionally, if fewer consumers used cards and fewer merchants accepted cards, the overall benefits of the credit card network could decrease. As discussed previously, card networks can bring benefits to both consumers and merchants, and economists have argued that the benefits of a credit card network increase as more consumers use its cards and more merchants accept its cards. However, some economists have reported that the change of rules for card acceptance in Australia has not decreased the use of credit cards significantly and that credit card use has grown by at least 5 percent per year since 1995. If consumers shifted from using cards with higher interchange fees, issuers could see decreased revenues. As mentioned previously, interchange revenues make up a larger percentage of total revenue for small issuers than for large issuers. Representatives of some credit unions and community banks told us that they were concerned that under this option, merchants might discriminate against their cards. 
These representatives also told us that without sufficient interchange revenues, many credit unions and small banks likely would not be able to issue credit cards. However, representatives of the Reserve Bank of Australia (RBA) have reported that removing this network rule did not appear to have significantly decreased the number of smaller issuers offering credit cards. They said that some smaller institutions have found it commercially beneficial to outsource some or all of their issuing activities to larger financial institutions or specialist issuers. In such outsourcing arrangements, the cards still carry the small issuer’s name, but other providers handle the processing and supply the credit. To preserve the ability of small issuers to successfully issue cards, some industry participants, including a large merchant, suggested that merchants could be allowed to refuse cards on the basis of costs, but could be prohibited from discriminating against cards on the basis of issuer. The extent to which merchants would take advantage of changes to network rules on card acceptance was unclear. Many merchants of various sizes told us that they would not apply surcharges or refuse certain cards because they feared losing business or because they thought that doing so could slow their checkout times. In addition, some merchants told us that they found it nearly impossible to distinguish among different types of credit and debit cards, making it difficult for them to determine which cards to refuse or to which to apply a surcharge because of higher interchange rates. In the Netherlands and Australia, where merchants are allowed to levy a surcharge, many merchants have opted not to do so. In Australia, the number of merchants that apply a surcharge for credit cards has been increasing since the practice was allowed in 2003.
However, according to a banking research firm that collected data on behalf of the Reserve Bank of Australia, as of June 2009, only about 18 percent of very small merchants, 20 percent of small merchants, 26 percent of large merchants, and 34 percent of very large merchants levied surcharges for credit cards. A study of surcharges on debit cards in the Netherlands found that about 22 percent of merchants added a charge for paying with a debit card (for sales below a certain amount). The results of a national poll by the National Federation of Independent Business indicate that 29 percent of U.S. small businesses who accept credit card payments would apply a surcharge for card payments if their contracts with card networks allowed, and about 13 percent currently have a minimum purchase amount for credit card sales. A fourth option would allow merchants to directly negotiate with card issuers to reach an agreement on interchange fees and terms, which likely would result in lower costs for merchants. Because collective bargaining by commercial groups, such as groups of merchants or businesses, can violate U.S. antitrust laws, an exemption from those laws would be necessary to facilitate such a process. According to DOJ, the granting of antitrust waivers in the United States can be justified only in very rare cases, but participants in specific industries have been granted antitrust waivers, including those in the insurance industry and agricultural cooperatives. According to its proponents, this option would allow merchants more leverage with card networks and issuers in negotiating interchange rates and terms and potentially lead to lower merchant costs. As discussed previously, some have argued that merchants, especially small merchants, are not able to negotiate interchange fees or terms with card networks or issuers, partly because of the large market shares of Visa, MasterCard, and the largest issuers. 
The large merchants we interviewed told us that their negotiations with card networks have been unsuccessful because they need to accept Visa and MasterCard cards to retain their customers and thus have to pay whatever prices Visa and MasterCard charge. Two of the small merchants we interviewed told us that they were unaware that such negotiations were possible. Some merchants and others have argued that allowing collective bargaining could result in a fairer interchange fee system because card networks would have less ability to set different rates for merchants based on industry type, volume of sales, and other factors; collective bargaining could result in one rate for all merchants. However, as mentioned previously, any option that decreases interchange fees may have opposing effects on different stakeholders (for example, decreased costs but decreased credit availability for merchants or lower prices but higher fees for cardholders). If negotiations resulted in lower interchange fees for merchants, then merchants could pass these savings to consumers through lower prices. However, given that representatives of issuers and card networks told us that issuers would likely respond to decreased interchange revenues by increasing annual fees, decreasing the value of rewards points, and possibly increasing interest rates and decreasing available credit, cardholders could be harmed. In addition, it could be difficult to ensure that small issuers and small merchants benefited from collective negotiations. Representatives of small issuers said that small issuers would not have sufficient market power to negotiate favorable interchange fees with a group of merchants. Furthermore, several of these representatives said they were concerned that merchants could come to agreements with large issuers under which the merchants would accept only the large issuers’ cards. 
Some merchants with whom we spoke were skeptical about the potential for small merchants to benefit from collective negotiations with networks and issuers. One small merchant told us that her store would not be able to participate in such negotiations because of limited staff resources. A significant legal barrier to implementing such negotiations is the need to obtain antitrust waivers, which DOJ has argued have been justified only in very rare instances in the United States. Credit card issuers and card network officials expressed concerns about creating exemptions to antitrust laws, which are designed to protect consumers from anticompetitive practices. In addition, DOJ officials have expressed their historical opposition to efforts to create exemptions to antitrust laws, stating that these exemptions should be used only in the rare instances in which a public policy objective compellingly outweighed free-market values. Furthermore, in response to a prior proposal that would have allowed for collective negotiations of interchange fees, DOJ officials expressed concern about the role that their agency would play in such negotiations. Cody Goebel, Assistant Director; Michael Aksman; Jessica Bryant-Bertail; Rudy Chatlos; Katherine Bittinger Eikel; Christine Houle; Nathan Gottfried; Yesook Merrill; Marc Molino; Rachel Munn; Barbara Roesmann; Paul Thompson; and C. Patrick Washington made key contributions to this report.
When a consumer uses a credit card to make a purchase, the merchant does not receive the full purchase amount because a certain portion of the sale is deducted to compensate the merchant's bank, the bank that issued the card, and the card network that processes the transaction. The level and growth of these rates have become increasingly controversial. The 2009 Credit Card Accountability, Responsibility, and Disclosure Act directed GAO to review (1) how the fees merchants pay have changed over time and the factors affecting the competitiveness of the credit card market, (2) how credit card competition has affected consumers, (3) the benefits and costs to merchants of accepting cards and their ability to negotiate those costs, and (4) the potential impact of various options intended to lower merchant costs. To address these objectives, GAO reviewed and analyzed relevant studies, literature, and data on the card payment market and interviewed industry participants, including large and small card issuers (including community banks and credit unions), card processors, card networks, large merchants representing a significant proportion of retail sales, and small merchants from a variety of industries, and academic experts. GAO provided a draft of this report to the Department of Justice, the Federal Trade Commission, and federal banking regulators, and we incorporated their technical comments where appropriate. According to Federal Reserve analysis, total costs of accepting credit cards for merchants have risen over time as consumers use cards more. Part of these increased costs also may be the result of how Visa and MasterCard competed to attract and retain issuers to offer cards by increasing the number of interchange fee categories and the level of these rates. 
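The deduction described at the opening of this summary can be sketched as simple arithmetic. The rates below are hypothetical; actual interchange, network assessment, and acquirer fees vary by card type and processing method, as the report notes.

```python
# Illustrative breakdown of a $100 credit card sale (hypothetical rates).
sale = 100.00
interchange_fee = sale * 0.017   # to the card-issuing bank
network_fee     = sale * 0.001   # to the card network (assessment)
acquirer_fee    = sale * 0.005   # to the merchant's (acquiring) bank

# The merchant discount is the total deducted from the sale;
# the merchant keeps the remainder.
merchant_discount = interchange_fee + network_fee + acquirer_fee
merchant_receives = sale - merchant_discount
print(f"{merchant_receives:.2f}")  # 97.70
```

Under these assumed rates, the interchange fee is by far the largest component of the merchant discount, which is consistent with the report's focus on interchange fees rather than the other components.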
Concerns remain over whether the level of these rates reflects market power--the ability of some card networks to raise prices without suffering competitive effects--or whether these fees reflect the costs that issuers incur to maintain credit card programs. Issuers, particularly smaller issuers such as community banks and credit unions, report relying on interchange fees as a significant source of revenue for their credit card operations, and analyses by banking regulators indicate such operations traditionally have been among the most profitable types of activities for large banks. Some consumers have benefited from competition in the credit card market, as cards often have no annual fees, lower interest rates than they did years ago, and greater rewards. However, consumers who do not use credit cards may be paying higher prices for goods and services, as merchants pass on their increasing card acceptance costs to all of their customers. For merchants, the benefits of accepting credit cards include increased sales and reduced labor costs. However, representatives from some of the large merchants with whom we spoke said their increased payment costs outstripped any increased sales. These merchants also reported that their inability to refuse popular cards and network rules (which prevent charging more for credit card than for cash payments or rejecting higher-cost cards) limited their ability to negotiate payment costs. Interchange fees are not federally regulated in the United States, but concerns about card costs have prompted federal investigations and private lawsuits, and authorities in more than 30 countries have taken or are considering taking actions to address such fees and other card network practices. 
Proposals for reducing interchange fees in the United States or other countries have included (1) setting or limiting interchange fees, (2) requiring their disclosure to consumers, (3) prohibiting card networks from imposing rules on merchants that limit their ability to steer customers away from higher-cost cards, and (4) granting antitrust waivers to allow merchants and issuers to voluntarily negotiate rates. If these measures were adopted here, merchants would benefit from lower interchange fees. Consumers would also benefit if merchants reduced prices for goods and services, but identifying such savings would be difficult. Consumers also might face higher card use costs if issuers raised other fees or interest rates to compensate for lost interchange fee income. Each of these options also presents challenges for implementation, such as determining the level at which to set interchange fees, providing meaningful information to consumers, or addressing the interests of both large and small issuers and merchants in bargaining efforts.
CARE Act base grants are distributed through a formula that includes HIV/AIDS case counts. Through its HIV/AIDS surveillance system, CDC receives case counts from states, the District of Columbia, and U.S. territories and associated jurisdictions. CDC provides these case counts to HRSA so that HRSA may determine CARE Act formula grant amounts. In fiscal year 2009, HRSA distributed approximately $410 million by formula under Part A of the CARE Act and about $1.1 billion by formula under Part B. Fifty-six metropolitan areas received Part A funds in fiscal year 2009. Twenty-four of the metropolitan areas were classified by HRSA as eligible metropolitan areas (EMA) and 32 as transitional grant areas (TGA). For fiscal years 2008 and 2009, the hold-harmless provision provided that an EMA receive at least 100 percent of the amount it had received as its base grant, including hold-harmless funding, for fiscal year 2007. Part B of the CARE Act provides funds to all 50 states, the District of Columbia, the Commonwealth of Puerto Rico, Guam, the U.S. Virgin Islands, and 5 other territories and associated jurisdictions. Part B grants include grants for HIV/AIDS services that are awarded by formula, AIDS Drug Assistance Program (ADAP) grants that are awarded by formula, emerging community grants that are awarded by formula for HIV/AIDS services, Part B supplemental grants for HIV/AIDS services, and ADAP supplemental grants. The Ryan White HIV/AIDS Treatment Modernization Act of 2006 (RWTMA) contained a hold-harmless provision that protects funding for Part B base grants and ADAP base grants. For fiscal years 2008 and 2009, a grantee’s total Part B base and ADAP base grants would be at least 100 percent of the total of such grants in fiscal year 2007. One condition of an ADAP grant is that grantees use every means at their disposal to secure the best price available for all products on their formularies. Best prices are determined by the prices that can be obtained under the 340B drug pricing program.
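The hold-harmless floors described above, for both Part A EMAs and the combined Part B and ADAP base grants, reduce to a simple maximum. The sketch below uses hypothetical dollar amounts.

```python
# Minimal sketch of the hold-harmless floor: for fiscal years 2008 and
# 2009, a grantee's award could not fall below 100 percent of its
# fiscal year 2007 amount. Figures are hypothetical.
def apply_hold_harmless(formula_amount, fy2007_amount):
    # The hold-harmless portion is the shortfall made up by the floor.
    award = max(formula_amount, fy2007_amount)
    hold_harmless_portion = award - formula_amount
    return award, hold_harmless_portion

# A grantee whose formula-derived amount fell below its fiscal year 2007
# award is held harmless at the 2007 level:
print(apply_hold_harmless(900_000, 1_000_000))    # (1000000, 100000)
# A grantee whose formula amount rose simply receives the larger amount:
print(apply_hold_harmless(1_200_000, 1_000_000))  # (1200000, 0)
```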
Generally, an ADAP purchasing drugs through the 340B program can use a direct purchase option or a rebate option. Under the 340B direct purchase option, ADAPs purchase drugs from drug manufacturers or through a third party, such as a drug purchasing agent. Using the 340B direct purchase option, ADAPs receive the 340B price discount up front. Under the rebate option, ADAPs typically contract with entities such as a pharmacy network or pharmacy benefits manager for purchase of covered drugs. ADAPs later request a rebate consistent with the section 340B price from the drug manufacturers. Due to RWTMA’s requirement that CARE Act formula funding be determined by using name-based HIV/AIDS case counts, grantees collecting HIV case counts by code must transition to a name-based reporting system. Although all grantees had name-based AIDS reporting systems, at the time of RWTMA seven grantees still used code-based HIV reporting systems, while 17 others had recently transitioned to a name-based HIV reporting system. It can take several years to transition to a name-based system because grantees must identify by name each case originally reported by code and then enter each case into the new, name-based reporting system. During the transition period from a code-based to a name-based system, a grantee can report its code-based HIV counts directly to HRSA and have these counts used to determine funding for fiscal years 2007 through 2009. However, in accordance with RWTMA, for each grantee relying on a code-based system, HRSA made a 5 percent reduction in the number of living HIV cases to adjust for potential duplicate reporting in systems that collect code-based case counts, thus reducing the award. RWTMA allowed the use of code-based HIV case counts through fiscal year 2009; it also provided that the status of a grantee under RWTMA for purposes of the transition period may not be considered after fiscal year 2009.
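The 5 percent adjustment described above amounts to a single multiplication; the case count below is hypothetical.

```python
# Hypothetical code-based case count; per the text, HRSA reduced such
# counts by 5 percent to adjust for potential duplicate reporting in
# code-based systems, thus reducing the resulting award.
code_based_cases = 12_000
adjusted_cases = code_based_cases * (1 - 0.05)
print(round(adjusted_cases))  # 11400
```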
Grantees that are transitioning to a name-based HIV reporting system determine when their name-based counts will be used by HRSA to calculate CARE Act formula funding. If the exemption permitting code-based reporting is not extended, it is likely that future fiscal year funding will be based exclusively on name-based counts. A grantee that had not completed the transition from code- to name-based case counts could face a reduction in funding because its name-based HIV reporting system could contain fewer cases than its code-based system. Once a grantee has transitioned to a name-based HIV reporting system, its system must be determined to be operational, as well as accurate and reliable, in order for the grantee’s name-based case counts to be used for funding purposes. To determine whether a system is operational, CDC, in consultation with the grantee’s HIV/AIDS surveillance program and epidemiologist, considers several factors, such as the grantee’s process for ensuring that HIV-positive individuals are counted only once and the number of providers and laboratories within the grantee’s jurisdiction diagnosing and reporting HIV-positive diagnoses to the grantee. The date CDC allows grantees to report name-based HIV cases to it is considered the date the reporting system becomes operational. Once the name-based HIV reporting system is declared operational, a grantee can determine that its reporting system is accurate and reliable (i.e., its case counts are complete) and can elect to have CDC send HRSA its name-based case counts to determine CARE Act formula funding. A grantee may declare its system to be accurate and reliable anytime after the system has been determined to be operational. However, regardless of the grantee’s assessment, CDC considers an HIV reporting system to be accurate and reliable no later than 4 years after the grantee began collecting name-based HIV case counts.
After a grantee determines that its system is accurate and reliable, or after the 4-year period, CDC transmits the HIV case counts to HRSA to be used in the funding formulas. RWTMA required HRSA to cancel funds from grantees’ awards that are unobligated at the end of the grant year, recover funds that had been disbursed, and redistribute these funds to other grantees. These unobligated balance provisions apply to base and supplemental grant awards under Parts A and B. For 2007 grants, HRSA required grantees to estimate and report their unobligated balance to HRSA 60 days prior to the end of the grant year. Grantees were also required to submit a Financial Status Report (FSR) to HRSA 90 days after the grant year ends. Grantees must report their actual unobligated balance on the FSR and the unobligated balance can be updated by the grantee for up to 6 months after the FSR is due. Unobligated balances of grant awards are canceled (with disbursed funds recovered) and then redistributed to grantees who apply for them as additional amounts for supplemental grants under Part A and Part B in the next fiscal year after the unobligated funds were reported. For base grant funds, the impact of unobligated balances differs based on whether the unobligated amount is more than 2 percent of the grant. All unobligated base grant funds must be canceled and recovered by HRSA if the grantee has not been granted a carryover waiver. HRSA takes this step following receipt of the FSR. In addition to having unobligated funds canceled and recovered unless a carryover waiver is granted, grantees with unobligated Part A, Part B, and ADAP base grant funds in excess of 2 percent of the grant award incur a penalty—a corresponding reduction in grant funds for the first fiscal year beginning after the fiscal year in which the Secretary receives the FSR. Grantees are assessed the reduction even if they were granted a waiver. 
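The unobligated-balance consequences described above can be sketched as a small decision function. This is a simplified, illustrative sketch: the grant amounts are hypothetical, and the distinct ADAP supplemental trigger, which instead depends on obligating at least 75 percent of the Part B award within 120 days, is omitted.

```python
# Hedged sketch of the unobligated-balance rules for Part A and Part B
# base grants described above (simplified; ADAP supplemental trigger omitted).
def unobligated_balance_consequences(grant_award, unobligated, waiver_granted):
    recovered = 0 if waiver_granted else unobligated    # canceled and recovered absent a waiver
    over_threshold = unobligated > 0.02 * grant_award   # the 2 percent trigger
    penalty = unobligated if over_threshold else 0      # assessed even with a waiver
    supplemental_eligible = not over_threshold          # Part A / Part B supplemental grants
    return recovered, penalty, supplemental_eligible

# A 3 percent unobligated balance on a hypothetical $10 million grant:
print(unobligated_balance_consequences(10_000_000, 300_000, False))
# (300000, 300000, False)
```

Note that a carryover waiver spares the grantee the recovery of funds but not, per the text, the dollar-for-dollar reduction applied when the balance exceeds 2 percent.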
Because FSRs are submitted 90 days after the grant year ends, grants for the next year have already been made by the time HRSA receives the information necessary to determine which grantees have an unobligated balance greater than 2 percent. As a result, there is a 1-year lag between when the unobligated balance occurs and when the penalty is assessed. For example, if a grantee had an unobligated balance of 3 percent in grant year 2007, the grantee’s FSR would have been filed in grant year 2008, and the dollar amount of the 2007 unobligated balance would have been deducted from the grantee’s award in grant year 2009. Figure 1 shows such a time line for 2007 Part B grant distribution and the unobligated balance provisions. In addition, grantees with unobligated balances of greater than 2 percent of Part A or Part B base grants are ineligible to receive supplemental grants for the year in which the reduction takes place. For Part A grantees, this means ineligibility for Part A supplemental grants; for Part B base grantees, it means ineligibility for Part B supplemental grants. For Part B ADAP grantees, an unobligated balance of greater than 2 percent does not result in ineligibility for ADAP supplemental grants. Instead, ineligibility for the ADAP supplemental grant is based on a grantee not obligating at least 75 percent of its entire Part B grant award within 120 days. Table 1 lists the triggers and penalties for the unobligated balance provisions. Most Part B grantees were collecting name-based HIV case counts in their reporting systems as of December 31, 2007, but not all grantees had HRSA use these case counts to determine fiscal year 2009 CARE Act funding. For 47 of the 59 Part B grantees, HRSA used name-based HIV case counts, as provided by CDC, to determine CARE Act funding. The remaining 12 grantees had HRSA use their code-based HIV case counts to determine fiscal year 2009 CARE Act funding.
Seven of the 12 grantees—California, the District of Columbia, Illinois, Maryland, Massachusetts, Oregon, and Rhode Island—were collecting name-based HIV case counts as of December 31, 2007, but submitted their code-based case counts to HRSA to determine CARE Act funding. Five of the 12 grantees—Hawaii, Vermont, the Federated States of Micronesia, Palau, and the Republic of the Marshall Islands—were not collecting name-based case counts as of December 31, 2007. Table 2 lists the 12 grantees for which code-based HIV case counts were used for fiscal year 2009 CARE Act formula funding, and the month and year that they began collecting name-based case counts. Each of these 12 grantees could require 4 years from the date they began collecting name-based HIV case counts for their name-based HIV reporting systems to be considered accurate and reliable. However, grantees can determine that their reporting systems are accurate and reliable in less than 4 years. Although 56 of the 59 Part B grantees are currently collecting name-based HIV case counts, some grantees could face a reduction in fiscal year 2010 funding if HRSA uses these counts to determine fiscal year 2010 funding. RWTMA allows grantees to submit code-based case counts to HRSA to determine funding for fiscal years 2007 through 2009; without an extension as part of the upcoming reauthorization, it is likely that HRSA would determine CARE Act funding for fiscal year 2010 using name-based case counts collected through December 2008. However, this could be problematic for some grantees. For example, as of December 2008, Vermont had only been collecting name-based case counts for 8 months. 
If Vermont’s system is not considered to be accurate and reliable—which could take up to 4 years—but its December 2008 name-based case count is nevertheless used to determine fiscal year 2010 funding, Vermont may not actually receive funding commensurate with the number of HIV/AIDS cases in the state, which is the intended basis for the formula grant. Further, its funding may be lower than what it received for fiscal year 2009. CDC has provided assistance for grantees transitioning from a code-based to a name-based HIV reporting system. CDC has provided grantees with technical assistance materials, ongoing assistance via conference calls, and additional assistance upon request. According to CDC, the District of Columbia and Massachusetts were the only Part B grantees that requested additional assistance in transitioning to a name-based system. CDC and HRSA plan to meet with grantee officials from the Federated States of Micronesia, Palau, and the Republic of the Marshall Islands to discuss HIV reporting. Part A hold-harmless funding was more widely distributed among EMAs in fiscal year 2009 than in fiscal year 2004. A larger percentage of EMAs qualified for hold-harmless funding in fiscal year 2009 than in fiscal year 2004, the last year for which we reported this information. About 71 percent of EMAs received hold-harmless funding in fiscal year 2009, while 41 percent received hold-harmless funding in fiscal year 2004. Furthermore, the percentage of the total hold-harmless funding received by the EMA with the most hold-harmless funding was smaller in fiscal year 2009 than in fiscal year 2004. In fiscal year 2009, New York received 52.7 percent of the hold-harmless funding, while in fiscal year 2004, San Francisco received 91.6 percent of the hold-harmless funding.
In addition to hold-harmless funding being more widely distributed in fiscal year 2009 than in fiscal year 2004, the total amount of hold-harmless funding provided to EMAs was larger in fiscal year 2009 than in fiscal year 2004. In fiscal year 2009, $24,836,500 in hold-harmless funding was distributed compared to $8,033,563 in fiscal year 2004. Table 3 lists the EMAs and their base grant and hold-harmless funding in fiscal years 2009 and 2004. The range of CARE Act funding differences among EMAs, as measured by funding per case, was smaller in 2009 than in 2004. In fiscal year 2009, EMA base funding per case ranged from $645 to $854, a range of $209. In fiscal year 2004, the funding per case ranged from $1,221 to $2,241, a range of $1,020. The smaller funding range resulted from San Francisco receiving less hold-harmless funding in fiscal year 2009 than in fiscal year 2004. In both years, San Francisco received the most hold-harmless funding per case. However, in fiscal year 2009, San Francisco received $208 in hold-harmless funding per case, while in fiscal year 2004 it received $1,020 in hold-harmless funding per case. Table 4 lists the 24 EMAs and their base grant and hold-harmless funding per case in fiscal years 2009 and 2004. Hold-harmless funding accounted for a larger percentage of San Francisco’s total base funding than it did for any other EMA in fiscal years 2009 and 2004, but the percentage was smaller in fiscal year 2009 than in fiscal year 2004. In fiscal year 2004, hold-harmless funding accounted for approximately 46 percent of San Francisco’s base grant while in fiscal year 2009 hold-harmless funding accounted for approximately 24 percent of San Francisco’s base grant. Table 5 lists the 24 EMAs and their hold-harmless funding as a percent of their base grants in fiscal years 2009 and 2004. In some cases, hold-harmless funding in fiscal year 2009 accounted for a significant portion of a grantee’s Part A base funding.
For example, San Francisco, which received the most hold-harmless funding per HIV/AIDS case in fiscal year 2009, received a total of $14,672,553 in base funding. Of this amount, $3,571,649 or 24.3 percent was due to the hold-harmless provision. Because of its hold-harmless funding, San Francisco, which had 17,173 HIV/AIDS cases, received a base grant equivalent to what an EMA with approximately 22,713 HIV/AIDS cases (32 percent more) would have received without hold-harmless funding. A significant portion of the differences in funding per case between San Francisco and the other EMAs results from how the San Francisco case counts are determined. The San Francisco EMA continues to be the only metropolitan area whose formula funding is based on both living and deceased AIDS cases. In February 2006 and October 2007, we reported that the San Francisco EMA was the only EMA still receiving CARE Act formula funding based on the number of living and deceased cases in a metropolitan area. All other EMAs received formula funding based on an estimate of the number of living cases. We showed that the fiscal year 2004 CARE Act formula funding for the San Francisco EMA was determined in part with reference to its fiscal year 1995 funding, which was based on both living and deceased AIDS cases. Because the San Francisco EMA also received hold-harmless funding in fiscal years 2005, 2006, 2007, and 2009, its fiscal year 2009 CARE Act formula funding continues to be based, in part, on the number of deceased cases in the San Francisco EMA as of 1995. Hold-harmless funding for other EMAs does not trace back to 1995 or earlier, a period when CARE Act funding was based on cumulative counts of AIDS cases, both living and deceased. If there had been no hold-harmless provision in fiscal year 2009, most grantees would have received more funding in fiscal year 2009 than they did. 
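San Francisco’s “equivalent case count” above can be reproduced with back-of-the-envelope arithmetic: divide the formula-only portion of the base grant by the case count to get a per-case rate, then ask how many cases would earn the full base grant at that rate. A rough check in Python (the variable names are ours; the small gap between this result and the reported 22,713 presumably reflects rounding in the published figures):

```python
cases = 17_173                # San Francisco HIV/AIDS cases, fiscal year 2009
total_base = 14_672_553       # total Part A base funding
hold_harmless = 3_571_649     # portion attributable to the hold-harmless provision

formula_only = total_base - hold_harmless    # funding earned by formula alone
per_case = formula_only / cases              # formula dollars per case
equivalent_cases = total_base / per_case     # cases needed to earn total_base by formula

print(round(hold_harmless / total_base * 100, 1))  # 24.3 (percent of base funding)
print(round(equivalent_cases))                     # about 22,700 (report: 22,713)
```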
Seventeen of the 24 EMAs would have received more funding if there had been no hold-harmless provision and if the $24.8 million that was used for hold-harmless funding had instead been distributed across all EMAs as supplemental grants, that is, in the same proportions as the supplemental grants. The funds used to meet the EMA hold-harmless requirement are deducted from the funds that would otherwise be available for supplemental grants before these grants are awarded. As a consequence, the pool of funds for supplemental grants is reduced by the amount of funding needed to meet the hold-harmless provision. Although 17 EMAs received hold-harmless funding in fiscal year 2009, only 7 (New York, San Francisco, San Juan, West Palm Beach, Newark, New Haven, and Nassau-Suffolk) received more funding because of the hold-harmless provision than they would have received through supplemental grants in the absence of the hold-harmless provision. Sixteen Part B grantees received reduced funding in grant year 2009 because they had unobligated balances over 2 percent in grant year 2007. Grantees we interviewed provided reasons why it is difficult to obligate all but 2 percent of their grant award. Grantees and HRSA said that drug rebates complicate grantees’ efforts to obligate grant funds. Nine states and seven territories and associated jurisdictions were assessed penalties in grant year 2009 because they had unobligated balances over 2 percent in grant year 2007. Arizona, Arkansas, Colorado, Delaware, Idaho, Maine, Nebraska, Ohio, and Pennsylvania were all assessed penalties along with seven of the U.S. territories and associated jurisdictions (American Samoa, Commonwealth of the Northern Mariana Islands, the Federated States of Micronesia, Guam, Palau, the Republic of the Marshall Islands, and the U.S. Virgin Islands). Table 6 shows the Part B grant year 2007 unobligated balances. No Part A grantees had unobligated balances over 2 percent.
To establish whether an unobligated balance penalty applied to a grantee’s 2009 grant, HRSA summed the Part B base and ADAP base unobligated balances to determine whether the total was more than 2 percent of the grantee’s total award (Part B base and ADAP base) for grant year 2007. As HRSA applied the provisions, Part B grantees can incur a penalty in both their Part B base and ADAP base grants even if the unobligated balance for one of these grants is less than 2 percent, as long as the sum of the Part B base and ADAP base balances is greater than 2 percent. For example, in grant year 2007 Maine had an unobligated balance of more than 2 percent in its ADAP base grant but less than 2 percent in its Part B base grant. The combined unobligated balance was 2.4 percent of its total award. Because the total was above 2 percent, HRSA reduced both the Part B base and ADAP base grants in grant year 2009. Of the 16 Part B grantees that incurred unobligated balance penalties, some incurred penalties in both their Part B base and ADAP base grants, while others had penalties only in their Part B base grants because they did not have unobligated ADAP balances. In grant year 2009, six states and one territory were assessed penalties in both their Part B base and ADAP base grants. Three states and six territories and associated jurisdictions had penalties assessed only on their Part B base grants because they did not have unobligated ADAP base balances. Part B base funding penalties ranged from $6,433 in Palau to $1,493,935 in Ohio. (See table 7.) ADAP base funding penalties ranged from $26,233 in Maine to $12,670,248 in Pennsylvania. (See table 8.)
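As we understand HRSA’s application of the provision described above, the 2 percent trigger is computed on the combined Part B base and ADAP base balances, and tripping it penalizes both grants. A minimal sketch in Python; the function name and dollar figures are ours, chosen only so the combined share mirrors the 2.4 percent in the Maine example:

```python
def combined_threshold_tripped(part_b_unobligated, adap_unobligated,
                               part_b_award, adap_award, threshold=0.02):
    """Return True if the summed unobligated balances exceed the threshold
    share of the summed awards -- in which case both the Part B base and
    ADAP base grants are reduced."""
    total_unobligated = part_b_unobligated + adap_unobligated
    total_award = part_b_award + adap_award
    return total_unobligated > threshold * total_award

# Maine-style case: the ADAP balance alone is over 2 percent of its award and
# the Part B balance alone is under, but the combined share is 2.4 percent.
print(combined_threshold_tripped(10_000, 110_000, 2_000_000, 3_000_000))  # True
```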
Pennsylvania’s ADAP base grant penalty accounted for 84 percent of the total amount of penalties for unobligated ADAP funds levied on 2009 grants. To calculate the final Part B base and ADAP base grant awards, the penalty attributable to an unobligated balance is applied after other calculations are made, including the addition of hold-harmless funding. If hold-harmless funds were instead added after the unobligated balance penalties were applied, they would negate the effect of the penalties because they would increase funding. For example, Colorado had a preliminary 2009 Part B base grant award of $3,666,928. Under the hold-harmless provision in RWTMA, Colorado was guaranteed Part B base grant funding of $3,683,544. Application of the RWTMA unobligated balance provision reduced the amount of its Part B base grant award (after the addition of hold-harmless funding) by $734,240, leaving Colorado with a final Part B base grant award of $3,099,404. In comparison, if hold-harmless funding had been added after the application of the unobligated balance penalty, Colorado would have received $3,683,544, the same as if it had incurred no unobligated balance penalty. Five of the 13 Part B grantees we interviewed had unobligated balances over 2 percent; these 5 grantees told us that they had varying reasons for their unobligated balances, some of which they said were beyond their control. For example, Arizona explained that it had an unobligated balance from its ADAP base grant, in part, because it had a dispute with a vendor it had contracted with to provide prescription drugs to clients. The vendor claimed that it had not been paid for services. According to state officials, to settle the dispute and comply with applicable state rules, Arizona had to pay the vendor twice. When the vendor realized that it had been overpaid, it reimbursed Arizona in the amount of $670,000. Arizona received the reimbursement at the end of the grant year.
Arizona was unable to spend this amount, leaving it with an unobligated balance of over 2 percent and a subsequent penalty. Grantees we interviewed, which included those that had unobligated balances of over 2 percent and those that did not, explained that they experienced difficulty obligating grant funds within the grant year. Three of the 13 Part B grantees we interviewed explained that they are currently dealing with economic factors such as state hiring freezes, spending caps, and furloughs of staff. One grantee explained that because of economic difficulties, his state has implemented new procedures as a means to limit state spending, including reclaiming state funding balances that are not spent quickly. Because of this new procedure, the grantee must allocate state funding, federal funding, and program income simultaneously, which he finds difficult. One grantee said the existence of the state hiring freeze has limited the amount of grant funding that could be obligated to fund staff positions. The grantee stated that the hiring freeze has been implemented as a means to limit state spending, but the state has imposed the hiring freeze on all programs, including those that receive federal funds. One Part B grantee explained that, while the grantee can to some extent control the contracts that are entered into and types of services that are provided, the grantee cannot control factors that affect the demand for program services. For example, the grantee cannot control the number of people who become infected; those who will lose their jobs and private health insurance and need to receive services supported with grant funds; and changes that occur with Medicaid and Medicare that can affect clients. Additionally, two grantees stated that because the grant awards can arrive after April 1, it can be helpful to carry over funds from the previous year’s grant award so that they can award contracts, rather than delay them until HRSA awards grant funds. 
These grantees said that they would like to be able to carry over funds without risking a reduction in future funding. One grantee explained that because grant awards are based on a formula and can fluctuate from year to year, it is helpful to have funding on hand, without risking a penalty, to maintain consistent service levels even if formula funding decreases. Six grantees expressed concern that the level of oversight required to obligate all but 2 percent of their grants leaves them unable to deal with unpredictable situations, such as a contractor going out of business. Six of the 13 grantees we interviewed said that they consider the 2 percent threshold too low, and some suggested that a 5 percent threshold would be more reasonable. Two of these grantees told us that if grantees had to obligate all but 5 percent of their funding, they would have more room to manage their budgets. However, only 2 of the 16 Part B grantees that received penalties for unobligated balances had unobligated balances of less than 5 percent. According to information provided by HRSA, 7 of the 13 Part B grantees we interviewed received drug rebates. In addition, Delaware informed us that it also receives rebates. Four of the eight grantees that received rebates said that the requirement that they spend drug rebates before spending grant funds makes it more difficult for them to obligate all but 2 percent of their grant awards, even though drug rebates are not subject to the unobligated balance provisions. The 27 Part B grantees that exclusively use the federal 340B rebate option to purchase their ADAP drugs typically contract with pharmacy networks or pharmacy benefit managers to purchase covered drugs; these intermediaries then request rebates from the pharmaceutical companies in order to obtain the 340B drug price and pass the savings on to the grantee.
Under RWTMA, drug rebates that grantees receive are not considered part of the grant award and are not subject to the unobligated balance provisions. However, federal regulations generally applicable to state and local government grantees require them to disburse rebates (along with program income and certain other amounts) before requesting additional cash payments. Accordingly, HRSA requires rebates to be spent before grantees obligate additional grant funds. Thus, grantees receiving drug rebates must prioritize spending these funds and several grantees said that this makes it more difficult to obligate grant funds in the grant year. While only three of the nine states that had a reduction in their ADAP base grants for grant year 2009 due to an unobligated balance received rebates, five of the eight grantees we interviewed that received rebates expressed concern about the requirement that drug rebate funds be spent before grant funds. One grantee explained that though it did not have an unobligated balance for grant year 2007, it took a great deal of effort to avoid one. Before RWTMA and the budget challenges in this state, this grantee saved state funds to spend at the end of the grant year so it could ensure that Part B funds were obligated and rebate funds were spent. However, because of state spending requirements put in place due to economic factors this state is currently facing, the grantee can no longer do this. In addition, spending rebates first can be difficult because rebate states often do not know when they will receive rebates; the state may send out requests every quarter, but may not receive the rebates until well into the next quarter or grant year. Rebate states may also not know the rebate amount beyond what they can estimate based on trends over the past year. Several grantees said that because of the variability of the rebate amounts and their timing, they could receive a large rebate check late in the year. 
They could then have unobligated grant fund balances of greater than 2 percent at that time because they use the rebate amounts when they become available rather than grant funds. Pennsylvania had an unobligated ADAP base grant balance of $12,670,248 in grant year 2007, and state officials said that a large part of the reason was its ADAP drug rebates. In grant year 2007, Pennsylvania received $11 million in rebates. These rebate funds had to be spent before it could obligate its ADAP base funding for grant year 2007. According to Pennsylvania officials, the Pennsylvania grantee has an administrative structure that allows it to spend its rebates only on the purchase of drugs, limiting how it can spend its rebate funds. Other states we spoke to can use rebate funds to provide Part B medical services as well, giving them greater flexibility in spending these funds. Pennsylvania officials told us that the state also had an unobligated ADAP base grant balance of over $2.4 million in grant year 2008. The Pennsylvania state government is working to revise its current structure. HRSA sought to address the interaction between drug rebate funds and the RWTMA unobligated balance provisions by requesting HHS’s permission to seek, from the Office of Management and Budget, an exemption from the otherwise applicable federal regulations for drug rebate states. HRSA told us that requiring ADAP rebate funds to be spent before grant funds increases the risk of unobligated balance penalties, and that the loss of grant funding and ineligibility for supplemental funding can pose difficulties for grantees. HRSA believes the unobligated balance requirements were intended to ensure that federal funds are spent promptly, not to create a mechanism through which federal grants would be reduced.
However, HRSA’s request for permission to seek an exemption for drug rebate states was denied by HHS in November 2007. HHS stated that while federal regulations and the unobligated balance provisions create significant challenges for rebate states, the justification HRSA presented for the class deviation was “not compelling.” HHS provided technical comments on a draft of the report, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff who made major contributions to this report are listed in appendix I. In addition to the contact above, Thomas Conahan, Assistant Director; Robert Copeland, Assistant Director; Leonard Brown; Romonda McKinney Bumpus; Cathleen Hamann; Sarah Resavy; Rachel Svoboda; and Jennifer Whitworth made key contributions to this report.
Funds are made available under the Ryan White Comprehensive AIDS Resources Emergency Act of 1990 (CARE Act) for individuals affected by HIV/AIDS. Part A provides for grants to metropolitan areas and Part B provides for grants to states and territories and associated jurisdictions for HIV/AIDS services and for AIDS Drug Assistance Programs (ADAP). The Ryan White HIV/AIDS Treatment Modernization Act of 2006 (RWTMA) reauthorized CARE Act programs for fiscal years 2007 through 2009. RWTMA requires name-based HIV case counts for determining CARE Act funding, but an exemption allows the use of code-based case counts through fiscal year 2009. RWTMA formulas include hold-harmless provisions that protect grantees' funding at specified levels. RWTMA also included provisions under which Part A and B grantees with unobligated balances over 2 percent at the end of the grant year incur a penalty in future funding. GAO was asked to examine CARE Act funding provisions. This report provides information on (1) how many Part B grantees collect and use name-based HIV case counts for CARE Act funding; (2) the distribution of Part A hold-harmless funding; and (3) reductions in Part B grantees' funding due to unobligated balance provisions. GAO reviewed agency documents and analyzed data on CARE Act funding. GAO interviewed 19 grantees chosen by geography, number of HIV/AIDS cases, and other criteria. GAO also interviewed federal government officials and other experts. Forty-seven of the total 59 Part B grantees had the Health Resources and Services Administration (HRSA) use their name-based HIV case counts to determine CARE Act formula funding for fiscal year 2009. The remaining 12 grantees had HRSA use their code-based HIV case counts to determine fiscal year 2009 CARE Act funding. If the exemption permitting code-based reporting is not extended, it is likely that future fiscal year funding will be based exclusively on name-based counts. 
Any Part B grantees that currently have name-based HIV reporting systems, but had not been collecting name-based HIV case counts long enough to include all cases, could face a reduction in fiscal year 2010 funding. Part A hold-harmless funding was more widely distributed among eligible metropolitan areas (EMA) in fiscal year 2009 than in fiscal year 2004, the last year for which we reported this information. Seventy-one percent of EMAs received hold-harmless funding in fiscal year 2009, whereas 41 percent received hold-harmless funding in fiscal year 2004. In fiscal year 2009, $24,836,500 in hold-harmless funding was distributed compared to $8,033,563 in fiscal year 2004. However, the range of CARE Act hold-harmless funding among EMAs, as measured by funding per case, was smaller in 2009 than in 2004. In fiscal year 2009, EMAs received from $0 to $208 in hold-harmless funding per case. In fiscal year 2004, EMAs received between $0 and $1,020 in hold-harmless funding per case. The hold-harmless funding resulted in EMAs receiving formula funding ranging from $645 to $854 per case in fiscal year 2009 and from $1,221 to $2,241 per case in fiscal year 2004. Sixteen Part B grantees had reductions in their grant year 2009 funding due to their unobligated balances at the end of grant year 2007. Part B base grant penalties ranged from $6,433 in Palau to $1,493,935 in Ohio. ADAP base grant penalties ranged from $26,233 in Maine to $12,670,248 in Pennsylvania. Part B grantees with unobligated funds provided various reasons for these balances, and said that some of these reasons were beyond their control. Grantees and HRSA stated that a requirement to spend drug rebate funds before obligating federal funds makes it more difficult to avoid unobligated balances. Twenty-seven ADAPs purchase drugs exclusively through a federal drug discount program, under which they pay full price and receive a rebate at some point in the future.
HRSA sought to address the interaction between drug rebate funds and the RWTMA unobligated balance provisions by requesting from the Department of Health and Human Services (HHS) permission to seek an exemption for grantees from the relevant regulations from the Office of Management and Budget. However, HHS denied this request, stating that the justification HRSA presented for requesting the exemption was "not compelling." HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
The federal real property portfolio is vast and diverse, totaling over 900,000 buildings and structures—including office buildings, warehouses, laboratories, hospitals, and family housing—worth hundreds of billions of dollars. The six largest federal property holders—DOD, GSA, the U.S. Postal Service, and the Departments of Veterans Affairs (VA), Energy, and the Interior—occupy 87.6 percent of the total square footage in federal buildings. Overall, the federal government owns approximately 83 percent of this space and leases or otherwise manages the rest; however, these proportions vary by agency. For example, GSA, the central leasing agent for most agencies, now leases more space than it owns. After we designated federal real property as a high-risk area in 2003, the President signed Executive Order 13327 in February 2004, which established new federal property guidelines for 24 executive branch departments and agencies. Among other things, the executive order called for creating the interagency FRPC to develop guidance, collect best practices, and help agencies improve the management of their real property assets. DOD has undergone four BRAC rounds since 1988 and is currently implementing its fifth round. Generally, the purpose of prior BRAC rounds was to generate savings to apply to other priorities, reduce property deemed excess to needs, and realign DOD’s workload and workforce to achieve efficiencies in property management. As a result of the prior BRAC rounds in 1988, 1991, 1993, and 1995, DOD reported that it had reduced its domestic infrastructure, transferred hundreds of thousands of acres of unneeded property to other federal and nonfederal entities, and saved billions of dollars annually that could be applied to other higher priority defense needs.
The 2005 BRAC round affected hundreds of locations across the country through 24 major closures, 24 major realignments, and 765 lesser actions, which also included terminating leases and consolidating various activities. Legislation authorizing the 2005 BRAC round maintained requirements established in the three previous BRAC rounds that GAO provide a detailed analysis of DOD’s recommendations and of the BRAC selection process. We submitted our report to Congress in July 2005 and testified before the BRAC Commission soon thereafter. Since that time, GAO has published annual reports on the progress, challenges, and costs and savings of the 2005 round, in addition to numerous reports on other aspects of implementing the 2005 BRAC round. When we designated federal real property management as high risk, we reported that the federal government faced a number of obstacles to effectively managing its real property. These included a lack of strategic focus on real property issues, a lack of reliable real property data, legal limitations, and stakeholder influences in real property decision making. In 2003, we reported that despite the magnitude and complexity of real-property-related problems, there had been no governmentwide strategic focus on real property issues. Not having a strategic focus can lead to ineffective decision making, such as choosing to rely too much on leasing for long-term government property needs. In 2008, we found that decisions to lease selected federal properties were not always driven by cost-effectiveness considerations. For example, we estimated that the decision to lease the Federal Bureau of Investigation’s field office in Chicago, Illinois, instead of constructing a building the government would own, cost about $40 million more over 30 years. GSA officials noted that limited availability of upfront capital was one of the reasons that prevented ownership at that time.
Federal budget scorekeeping rules require the full cost of construction to be recorded up front in the budget, whereas only the annual lease payments plus cancellation costs need to be recorded for operating leases. In April 2007 and January 2008, we recommended that the Office of Management and Budget (OMB) develop a strategy to reduce agencies’ reliance on costly leasing where ownership would result in long-term savings. We noted that such a strategy could identify the conditions under which leasing is an acceptable alternative, include an analysis of real property budget scoring issues, and provide an assessment of viable alternatives. OMB concurred with this recommendation but has not yet developed a strategy to reduce agencies’ reliance on leasing. In 2003, we found that a lack of reliable real property data compounded real property management problems. The governmentwide data maintained at that time were unreliable, out of date, and of limited value. In addition, certain key data that would be useful for budgeting and strategic management were not being maintained, such as data on space utilization, facility condition, historical significance, security, and age. We also found that some of the major real-property-holding agencies faced challenges developing reliable data on their real property assets. We noted that reliable governmentwide and agency-specific real property data are critical for addressing real property management challenges. For example, better data would help the government determine whether assets are being used efficiently, make investment decisions, and identify unneeded properties. In our February 2011 high-risk update, we noted that a third obstacle to consolidating federal properties is the legal requirements agencies must adhere to before disposing of a property, such as requirements for screening and environmental cleanup.
Currently, before GSA can dispose of a property that a federal agency no longer needs, it is required to offer the property to other federal agencies. If other federal agencies do not have a need for the property, GSA must then make the property available to state and local governments and certain nonprofit organizations and institutions for public benefit uses such as homeless shelters, educational facilities, or fire or police training centers. As a result of this lengthy process, GSA’s underutilized or excess properties may remain in an agency’s possession for years and continue to accumulate maintenance and operations costs. Further complicating this issue is that different agencies have different authorities to enter into leases with public and private entities for the use of federal property, to sell real property, and to retain the proceeds from these transactions. For example, DOD has the authority to both enter into these leases and retain proceeds for the sale of properties, but the Department of Justice does not have the authority to do either. In addition, federal agencies are required by law to assess and pay for any environmental cleanup that may be needed before disposing of a property—a process that may require years of study and result in significant costs. In some cases, the cost of the environmental cleanup may exceed the costs of continuing to maintain the excess property in a shut-down status. We have also noted that the National Historic Preservation Act, as amended, requires agencies to manage historic properties under their control and jurisdiction and to consider the effects of their actions on historic preservation. Since properties more than 50 years old are eligible for historic designation and the average age of properties in GSA’s portfolio is 46 years, this issue will soon become critically important to GSA. 
Local stakeholders—including local governments, business interests, private real estate interests, private-sector construction and leasing firms, historic preservation organizations, various advocacy groups for citizens that benefit from federal programs, and the public in general—often view federal facilities as the physical face of the federal government in their communities. The interests of these multiple and often competing stakeholders may not always align with the most efficient use of government resources and can complicate real property decisions. For example, as we first reported in 2007, VA officials noted that stakeholders and constituencies, such as historic building advocates or local communities that want to maintain their relationship with VA, often prevent the agency from disposing of properties. In 2003, we indicated that an independent commission or governmentwide task force might be necessary to help overcome stakeholder influences in real property decision making. The administration and real-property-holding agencies have made progress in a number of areas since we designated federal real property as high risk in 2003. Specifically, the federal government has taken steps toward strategically managing its real property and improving the reliability of its real property data. However, many problems related to unneeded property and leasing persist because the government has not addressed the underlying legal limitations and stakeholder influences that we identified. As part of the government’s efforts to strategically manage its real property, the administration established FRPC—a group composed of the OMB Controller and the senior real property officers of landholding agencies—to support real property reform efforts. Through FRPC, the landholding agencies have also established asset management plans, standardized real property data reporting, and adopted various performance measures to track progress. 
The asset management plans are updated annually and help agencies take a more strategic approach to real property management by indicating how real property moves the agency’s mission forward, outlining the agency’s capital management plans, and describing how the agency plans to operate its facilities and dispose of unneeded real property, including listing current and future disposal plans. Although several FRPC member agencies said that the body no longer meets regularly, it remains a forum for agency coordination on real property issues and could serve a larger role in future real property management. In our February 2011 high-risk update, we reported that the federal government has also taken numerous steps since 2003 to improve the completeness and reliability of its real property data. FRPC, in conjunction with GSA, established the Federal Real Property Profile (FRPP) to meet a requirement in Executive Order 13327 for a single real property database that includes all real property under the control of executive branch agencies. FRPP contains asset-level information submitted annually by agencies on 25 high-level data elements, including four performance measures that enable agencies to track progress in achieving property management objectives. In response to our 2007 recommendation to improve the reliability of FRPP data, OMB required, and agencies implemented, data validation plans that include procedures to verify that the data are accurate and complete. Furthermore, GSA’s Office of Governmentwide Policy (OGP), which administers the FRPP database, instituted a data validation process that precludes FRPP from accepting an agency’s data until the data pass all established business rules and data checks. In our most recent analysis of the reliability of FRPP data, we found none of the basic problems we have previously found, such as missing data or inexplicably large changes between years. 
In addition, agencies continue to improve their real property data for their own purposes. From a governmentwide perspective, OGP has sufficient standards and processes in place for us to consider the 25 elements in FRPP as a database that is sufficiently reliable to describe the real property holdings of the federal government. Consequently, we removed the data element of the real property management high-risk area this year. In 2007, we recommended that OMB, which is responsible for reviewing agencies’ progress on federal real property management, assist agencies by developing an action plan to address the key problems associated with decisions related to unneeded real property, including stakeholder influences. OMB agreed with the recommendation but has yet to implement it. However, the administration’s recently proposed legislative framework, CPRA, is somewhat responsive to this recommendation in that it addresses both legal limitations and stakeholder influences in real property decision making. According to the proposal, the purpose of CPRA would be, in part, to “streamline the current legal framework” and “facilitate the disposal of those unneeded civilian real properties that are currently subject to legal restrictions that prevent their disposal.” The proposal itself, however, does not describe how this streamlining would be accomplished. To address stakeholder influences, CPRA would create an independent board to recommend federal properties for disposal or consolidation after receiving recommendations from civilian landholding agencies. Grouping all disposal and consolidation decisions into one list that Congress would vote on in its entirety could help to blunt local stakeholder influences at any individual site. In addition, CPRA could help to reduce the government’s overreliance on leasing by recommending that the government consolidate operations from leased space to owned space where efficient. 
In our prior work on the BRAC process, we identified certain key elements underpinning the process, which may be applicable to the management of real property governmentwide. The BRAC process was designed to address certain challenges to closures or realignments, including stakeholder interests, thereby permitting DOD to dispose of installations or realign its missions to better use its facilities and generate savings. The 2005 BRAC round followed a historical analytical framework, carrying many elements of the process forward or building upon lessons learned from previous rounds. DOD also established a structured process for obtaining and analyzing data that provided a consistent basis for identifying and evaluating closure and realignment recommendations, and DOD used a logical, reasoned, and well-documented process. In addition, we have identified lessons learned from DOD’s 1988, 1991, 1993, and 1995 rounds, and we have begun an effort to assess lessons learned from the 2005 BRAC round. 
DOD’s 2005 BRAC Process 
DOD’s 2005 BRAC process consisted of a series of legislatively prescribed steps, as follows: DOD began to develop options for closure or realignment recommendations. The military departments developed service-specific installation closure and realignment options. In addition, the Office of the Secretary of Defense established seven joint cross-service teams, called joint cross-service groups, to develop options across common business-oriented functions, such as medical, supply storage, and administrative activities. These closure and realignment options were reviewed by DOD’s Infrastructure Executive Council—a senior-level policy-making and oversight body for the entire process. Options approved by this council were submitted to the Secretary of Defense for his review and approval. 
DOD developed hundreds of closure or realignment options for further analysis, which eventually led to DOD’s submitting over 200 recommendations to the BRAC Commission for analysis and review. BRAC Commission performed an independent review of DOD’s recommendations. After DOD selected its base closure and realignment recommendations, it submitted them to the BRAC Commission, which performed an independent review and analysis of DOD’s recommendations. The Commission could approve, modify, reject, or add closure and realignment recommendations. Also, the BRAC Commission provided opportunities to interested parties, as well as community and congressional leaders, to provide testimony and express viewpoints. The Commission then voted on each individual closure or realignment recommendation, and those that were approved were included in the Commission’s report to the President. In 2005, the BRAC Commission reported that it had rejected or modified about 14 percent of DOD’s closure and realignment recommendations. President approved BRAC recommendations. After receiving the recommendations, the President was to review the recommendations of the Secretary of Defense and the Commission and prepare a report by September 23, 2005, containing his approval or disapproval of the Commission’s recommendations as a whole. Had the President disapproved of the Commission’s recommendations, the Commission would have had until October 20, 2005, to submit a revised list of recommendations to the President for further consideration. If the President had not submitted a report to Congress of his approval of the Commission’s recommendations by November 7, 2005, the BRAC process would have been terminated. The President submitted his report and approval of the 2005 Commission’s recommendations on September 15, 2005. Congress allowed the recommendations to become binding. 
After the President transmitted his approval of the Commission’s recommendations to Congress, the Secretary of Defense would have been prohibited from implementing the recommendations if Congress had passed a joint resolution of disapproval within 45 days of the date of the President’s submission or the adjournment of Congress for the session, whichever was sooner. Since Congress did not pass such a resolution, the recommendations became binding in November 2005. Congress established clear time frames for implementation. The BRAC legislation required DOD to complete the closures and realignments recommended in the 2005 round within a 6-year time frame ending on September 15, 2011, which was 6 years from the date the President submitted his approval of the recommendations to Congress. In July 2010, in our most recent report on the implementation of the 2005 BRAC recommendations, we reported that many DOD locations are scheduled to complete actions to implement the recommendations within months of the deadline, leaving little or no margin for slippage to finish constructing buildings and to move or hire the needed personnel. In developing its recommendations for the BRAC Commission, DOD relied on certain elements, as follows: Establish goals for the BRAC process. Prior to the start of the 2005 round, the Secretary of Defense emphasized the importance of transforming the military to make it more efficient. Other goals for the 2005 BRAC process included fostering jointness among the four military services, reducing excess infrastructure, and producing savings. Prior rounds focused primarily on reducing excess infrastructure and producing savings. Develop criteria for evaluating closures and realignments. DOD initially proposed eight selection criteria, which were made available for public comments via the Federal Register. Ultimately, Congress enacted the eight final BRAC selection criteria. 
In authorizing the 2005 BRAC round, Congress specified that the following four selection criteria, known as the “military value criteria,” were to be given priority in developing closure and realignment recommendations: (1) current and future mission capabilities and the impact on operational readiness of the total force; (2) availability and condition of land, facilities, and associated airspace at both the existing and the potential receiving locations; (3) ability to accommodate a surge in the force and future total force requirements at both the existing and the potential receiving locations to support operations and training; and (4) costs of operations and personnel implications. In addition to military value, Congress specified that DOD was to apply the following “other criteria” in developing its recommendations: (1) costs and savings associated with a recommendation; (2) economic impact on local communities near the installations; (3) ability of infrastructure to support forces, missions, and personnel; and (4) environmental impact. Additionally, Congress required that the Secretary of Defense develop and submit to Congress a force structure plan that laid out the numbers, size, and composition of the units that comprise U.S. defense forces, for example, divisions, ships, and air wings, based on the Secretary’s assessment of the probable national security threats for the 20-year period beginning in 2005, along with a comprehensive inventory of global military installations. In authorizing the 2005 BRAC round, Congress specified that the Secretary of Defense publish a list of recommendations for the closure and realignment of military installations inside the United States based on the force structure plan and infrastructure inventory, and on the eight final selection criteria. Estimate costs and savings to implement closure and realignment recommendations. 
To address the cost and savings criteria, DOD developed and used the Cost of Base Realignment Actions model—known as COBRA—a quantitative tool that DOD has used since the 1988 BRAC round to provide consistency in potential cost, savings, and return-on-investment estimates for closure and realignment options. We examined the COBRA model as part of our reviews of the 2005 and prior BRAC rounds and found it to be a generally reasonable estimator for comparing potential costs and savings among alternatives. As with any model, the quality of the output is a direct function of the input data. Also, the COBRA model relies to a large extent on standard factors and averages and does not represent budget-quality estimates, which are developed once BRAC decisions are made and detailed implementation plans are developed. Nonetheless, the financial information provides important input into the selection process as decision makers weigh the financial implications—along with military value and other factors—in arriving at final decisions regarding the suitability of various closure and realignment options. However, based on our assessment of the 2005 BRAC round, actual costs and savings were different from estimates. As we reported in November 2009, BRAC one-time implementation costs have risen to almost $35 billion in fiscal year 2010 compared with DOD’s initial estimate of $21 billion in 2005. Similarly, net annual recurring savings have dropped to $3.9 billion in fiscal year 2010 compared with the $4.2 billion DOD estimated in 2005. Establish a common analytical framework. To ensure that the selection criteria were consistently applied, the Office of the Secretary of Defense required the military services and the seven joint cross-service groups to first perform a capacity analysis of facilities and functions at specific locations prior to developing recommendations. 
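The return-on-investment comparison that the cost and savings figures above feed into can be reduced to simple payback arithmetic. The following is a minimal sketch only, not DOD's actual COBRA model; the function name and structure are illustrative assumptions, applied to the fiscal year 2010 figures cited above.

```python
# Minimal sketch of the payback arithmetic behind a cost-and-savings
# comparison. This is NOT DOD's actual COBRA model; the function name
# and structure are illustrative assumptions.

def payback_years(one_time_cost: float, annual_savings: float) -> float:
    """Years of recurring savings needed to recoup a one-time cost."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive to recoup costs")
    return one_time_cost / annual_savings

# Figures reported above for the 2005 round, in billions of dollars:
# almost $35 billion in one-time implementation costs against
# $3.9 billion in net annual recurring savings as of fiscal year 2010.
print(f"{payback_years(35.0, 3.9):.1f} years to break even")  # roughly 9 years
```

Under these reported figures, the one-time costs would take roughly nine years of recurring savings to recoup, which is one way decision makers weigh financial implications alongside military value and other factors.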
The capacity analysis relied on data calls to hundreds of locations to obtain certified data to assess such factors as maximum potential capacity, current capacity, current usage, and excess capacity. Then, the military services and joint cross-service groups performed a military value analysis for the facilities and functions that included a facility’s or function’s current and future mission capabilities, physical condition, ability to accommodate future needs, and cost of operations. Establish an organizational structure. As previously mentioned, the Office of the Secretary of Defense emphasized the need for joint cross-service groups to analyze common business-oriented functions. For the 2005 round, as for the 1993 and 1995 rounds, these joint cross-service groups performed analyses and developed closure and realignment options in addition to those developed by the military departments. In contrast, our evaluation of DOD’s 1995 round indicated that few cross-service recommendations were made, in part because of the lack of high-level leadership to encourage consolidations across the services’ functions. In the 1995 round, the joint cross-service groups submitted options through the military departments for approval, resulting in few being approved. The number of approved recommendations that the joint cross-service groups developed significantly increased in the 2005 round. This was due, in part, to high-level leadership’s ensuring that the options were approved not by the military services but rather by a DOD senior-level group. Also, one of these joint cross-service groups developed a number of recommendations to realign administrative-type functions out of leased space into DOD-owned facilities. Involve the audit community to better ensure data accuracy. The DOD Inspector General and military service audit agencies played key roles in identifying data limitations, fostering corrections, and improving the accuracy of the data used in the process. 
The oversight roles of the audit organizations, given their access to relevant information and officials as the process evolved, helped to improve the accuracy of the data used in the BRAC process and added an important aspect to the quality and integrity of the data used to develop closure and realignment recommendations. There are a number of important similarities and differences between BRAC and a civilian process as proposed in CPRA. As a similarity, both BRAC and CPRA employ an all-or-nothing approach to disposals and consolidations, meaning that once the final list is assembled, it must be approved or rejected as a whole. This approach can help overcome stakeholders’ interests. Another similarity may be the need for a phased approach. Through the five prior BRAC rounds, DOD has reduced its domestic infrastructure, transferred hundreds of thousands of acres of unneeded property to other federal and nonfederal entities, and saved funds for application to higher priority defense needs. Similarly, it may take several BRAC-like rounds to complete the disposals and consolidations of civilian real property owned and leased by many disparate agencies, including GSA, VA, Interior, and the Department of Energy. On the other hand, an important difference in the two processes may be the role of the independent board. DOD has participated in the BRAC process by generating lists of bases to close and realign that the last four BRAC Commissions have then reviewed. On the civilian side, however, agencies would provide recommendations to the proposed civilian board, but the board would ultimately be responsible for developing the lists of disposals and consolidations. In closing, the government has made strides toward strategically managing its real property and improving its real property planning and data over the last 10 years, but those efforts have not yet led to sufficient reductions in excess property and overreliance on leasing. 
DOD’s experiences with BRAC, including establishing criteria and a common analytical framework up front, could help this effort move forward. Chairman Denham, Ranking Member Norton, and Members of the Subcommittee, this concludes our prepared statement. We will be pleased to answer any questions that you may have at this time. For further information on this testimony, please contact David Wise at (202) 512-2834 or wised@gao.gov regarding federal real property, or Brian Lepore at (202) 512-4523 or leporeb@gao.gov regarding the Base Realignment and Closure process. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. In addition to the contacts named above, Keith Cunningham, Assistant Director; Laura Talbott, Assistant Director; Vijay Barnabas; Hilary Benedict; Jessica Bryant-Bertail; Elizabeth Eisenstadt; Sarah Farkas; Susan Michal-Smith; and Michael Willems made important contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government holds more than 45,000 underutilized properties that cost nearly $1.7 billion annually to operate, yet significant obstacles impede efforts to close, consolidate, or find other uses for them. In January 2003, GAO designated federal real property management as a high-risk area, in part because of the number and cost of these properties. The Office of Management and Budget (OMB) is responsible for reviewing federal agencies' progress in real property management. In 2007, GAO recommended that OMB assist agencies by developing an action plan to address key obstacles associated with decisions related to unneeded real property, including stakeholder influence. The President's fiscal year 2012 budget proposed establishing a legislative framework for disposing of and consolidating civilian real property, referred to as a Civilian Property Realignment Act (CPRA), which may be designed to address stakeholder influences in real property decision making. This testimony identifies (1) obstacles to effectively managing federal real property, (2) actions designed to overcome those obstacles, including government actions and CPRA, and (3) key elements of the Department of Defense's (DOD) base realignment and closure (BRAC) process that are designed to help DOD close or realign installations and may be relevant for CPRA. To do this work, GAO reviewed GAO reports, other reports, and CPRA. In designating federal real property management as a high-risk issue in 2003, GAO found that the federal government faced a number of obstacles to effectively managing its real property. These included its lack of strategic focus on real property issues, a lack of reliable real property data, legal limitations, and stakeholder influence. That year, GAO reported that despite the magnitude and complexity of real-property-related problems, there was no governmentwide strategic focus on real property issues and that governmentwide data were unreliable and outdated. 
GAO also reported then that before disposing of excess property, the General Services Administration is legally required to follow a lengthy screening process, which includes offering the property to other federal agencies and other entities for public uses. Furthermore, stakeholders--including local governments, private real estate interests, and advocacy groups--may have different interests that do not always align with the most efficient use of government resources. Since 2003, the federal government has taken steps to address some of these obstacles and improve its real property management. For instance, the administration and real-property-holding agencies have improved their strategic management of real property by establishing an interagency Federal Real Property Council designed to enhance real property planning processes. The government has also implemented controls to improve the reliability of federal real property data. However, many problems related to unneeded property and leasing have persisted because legal limitations and stakeholder influences remain. GAO's 2007 recommendation that OMB develop an action plan is designed to address these problems. In addition, CPRA proposes an independent board to identify facilities for disposal and consolidation, which could streamline legal requirements and mitigate stakeholder influences. Congress authorized DOD to undergo five BRAC rounds to reduce excess property and realign DOD's workload to achieve efficiencies and savings in property management. The BRAC process, much like CPRA, was designed to address obstacles to closures or realignments, thus permitting DOD to close installations or realign its missions to better use its facilities and generate savings. 
GAO's prior work on the BRAC process identified certain key elements that may be applicable to managing civilian real property, such as establishing goals and an organizational structure, developing criteria and an analytical framework, using a model to estimate costs and savings, and involving the audit community to better ensure data accuracy. A key similarity between BRAC and CPRA is that both establish an independent board that reviews agency recommendations; a key difference is that the BRAC process created criteria for selecting installations for realignment while CPRA does not include specific criteria to be used to select properties for disposal or consolidation.
In 1992, the National Commission on Severely Distressed Public Housing (the Commission) reported that approximately 86,000, or 6 percent, of the nation’s public housing units were severely distressed—characterized by physical deterioration and uninhabitable living conditions, high levels of poverty, inadequate and fragmented services, institutional abandonment, and location in neighborhoods often as blighted as the public housing sites themselves. In response to the Commission’s report, Congress established the Urban Revitalization Demonstration Program, more commonly known as HOPE VI, at HUD. The program awards grants to public housing authorities (PHAs). The grants can fund, among other things, the demolition of distressed public housing, capital costs of major rehabilitation, new construction, and other physical improvements, and community and supportive service programs for residents, including those relocated as a result of revitalization efforts. Beginning in 1996 with the adoption of the Mixed-Finance Rule, PHAs were allowed to use public housing funds designated for capital improvements, including HOPE VI funds, to leverage other public and private investment to develop public housing units. Public funding can come from federal, state, and local sources. For example, HUD itself provides capital funding to housing agencies to help cover the costs of major repair and modernization of units. Private sources can include mortgage financing and financial or in-kind contributions from nonprofit organizations. HUD’s requirements for HOPE VI revitalization grants are laid out in each fiscal year’s notice of funding availability (NOFA) and grant agreement. NOFAs announce the availability of funds and contain application requirements, threshold requirements, rating factors, and the application selection process. 
Grant agreements, which change each fiscal year, are executed between each grantee and HUD and specify the activities, key deadlines, and documentation that grantees must meet or complete. NOFAs and grant agreements also contain guidance on resident involvement in the HOPE VI process. HUD encourages grantees to communicate, consult, and collaborate with affected residents and the broader community, but allows grantees the final decision-making authority. Grant applications are screened to determine whether they meet the eligibility and threshold requirements in the NOFA. A review panel (which may include the Deputy Assistant Secretary for Public Housing Investments, the Assistant Secretary for Public and Indian Housing, and other senior HUD staff) recommends the most highly rated applications for selection, subject to the amount available for funding. HUD’s Office of Public Housing Investments, housed in the Office of Public and Indian Housing, manages the HOPE VI program. Grant managers within the Office of Public Housing Investments are primarily responsible for overseeing HOPE VI grants. They approve changes to the revitalization plan and coordinate the review of the community and supportive services plan that each grantee submits. In addition, grant managers track the status of grants by analyzing data on the following key activities: relocation of original residents, demolition of distressed units, new construction or rehabilitation, reoccupancy by some original residents, and occupancy of completed units. Public and Indian Housing staff located in HUD field offices also play a role in overseeing HOPE VI grants, including coordinating and reviewing construction inspections. In fiscal year 1999, HUD began to encourage HOPE VI revitalization grant applicants to form partnerships with local universities to evaluate the impact of their proposed HOPE VI revitalization plans. 
In 2003, Congress reauthorized the HOPE VI program and required us to report on the extent to which public housing for the elderly and non-elderly persons with disabilities was severely distressed. We subsequently reported that available data on the physical and social conditions of public housing are insufficient to precisely determine the extent to which developments occupied primarily by elderly persons and non-elderly persons with disabilities are severely distressed. Using HUD’s data on public housing developments—buildings or groups of buildings—and their tenants, we identified 3,537 developments primarily occupied by elderly residents and non-elderly persons with disabilities. Data from HUD and other sources indicated that 76 (2 percent) of these 3,537 developments were potentially severely distressed. According to our analysis of HUD data for our November 2002 report, housing authorities expected to leverage an additional $1.85 in funds from other sources for every dollar received in HOPE VI revitalization grants awarded since the program’s inception through fiscal year 2001. However, HUD considered the amount of leveraging to be slightly higher because it treated as “leveraged” both (1) HOPE VI grant funds competitively awarded for the demolition of public housing units and (2) other public housing capital funds that the housing authorities would receive even in the absence of the revitalization grants. Even when public housing funds were excluded from leveraged funds, our analysis of HUD data showed that projected leveraging had increased; for example, 1993 grantees expected to leverage an additional $0.58 for every HOPE VI grant dollar (excluding public housing funds), while 2001 grantees expected to leverage an additional $2.63 from other sources (excluding public housing funds). 
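The leverage figures above are simple ratios of funds budgeted from other sources to HOPE VI grant dollars. The following is a minimal sketch of that arithmetic; the function name and dollar amounts are hypothetical illustrations, not HUD data.

```python
# Minimal sketch of how a leveraging ratio such as "$1.85 leveraged per
# HOPE VI grant dollar" is computed: funds budgeted from non-grant
# sources divided by the grant amount. Hypothetical figures only.

def leverage_ratio(grant_dollars: float, other_funds: float) -> float:
    """Dollars leveraged from other sources per grant dollar."""
    if grant_dollars <= 0:
        raise ValueError("grant amount must be positive")
    return other_funds / grant_dollars

# Hypothetical example: a $20 million HOPE VI grant paired with
# $37 million budgeted from other public and private sources.
print(f"${leverage_ratio(20_000_000, 37_000_000):.2f} per grant dollar")  # $1.85
```

Whether public housing capital funds count as "other sources" changes the numerator, which is why HUD's reported ratios ran higher than ours, as described above.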
However, our analysis of HUD data through fiscal year 2001 also indicated that 79 percent of funds that PHAs had budgeted came from federal sources, when low-income housing tax credit funding was included. Finally, our analysis showed that although the majority of funds budgeted overall for supportive services were HOPE VI funds, the amount of non-HOPE VI funds budgeted for supportive services increased dramatically since the program’s inception. Specifically, while 22 percent of the total funds that fiscal year 1997 grantees budgeted for supportive services were leveraged funds, 59 percent of the total that fiscal year 2001 grantees budgeted were leveraged funds. Although HUD had been required to report leveraging and cost information to the Congress annually since 1998, it had not done so at the time of our 2002 report. As required by law, this annual report is to include the cost of public housing units revitalized under the program and the amount and type of financial assistance provided under and in conjunction with the program. We recommended that the Secretary of Housing and Urban Development provide these annual reports to Congress and include in these annual reports, among other things, information on the amounts and sources of funding used at HOPE VI sites, including equity raised from low-income housing tax credits, and the total cost of developing public housing units at HOPE VI sites, including the costs of items subject to HUD’s development cost limits and those not subject to these limits. In response to this recommendation, HUD issued annual reports to Congress for fiscal years 2002 through 2006 that include information on the amounts and sources of funding used at HOPE VI sites. In each of these reports, HUD included the amount of funds leveraged from low-income housing tax credits in its data on non-federal funds. 
Based on data reported in the 2006 annual report, since the program’s inception HOPE VI grantees have cumulatively leveraged $1.28 per HOPE VI grant dollar expended. Currently, we have work underway examining, among other things, how and the extent to which leveraging occurs in several federal programs, including the HOPE VI program. Our May 2003 report found that a variety of factors diminished HUD’s ability to oversee HOPE VI grants: in particular, limited numbers of grant managers, a shortage of field office staff, and confusion about the role of field offices. Our site visits showed that HUD field staff were not systematically performing required annual reviews. For example, among revitalization grants awarded in 1996, some had never received an annual review, and none had received an annual review in every year since the grant award. From our interviews with field office managers, we identified two reasons why annual reviews were not performed. First, many of the field office managers we interviewed stated that they simply did not have enough staff to get more involved in overseeing HOPE VI grants. Second, some field offices did not seem to understand their role in HOPE VI oversight. For instance, one office thought that the annual reviews were primarily the responsibility of the grant managers. Others stated that they had not performed the reviews because construction had not yet started at the sites in their jurisdiction or because they did not think they had the authority to monitor grants. As a result of our findings, we recommended that HUD clarify the role of HUD field offices in HOPE VI oversight and ensure that the offices conducted required annual reviews. In response to this recommendation, HUD published new guidance in March 2004 that clarified the role of HUD field offices in HOPE VI oversight and the annual review requirements. 
According to the guidance, HUD field office responsibilities include conducting an annual risk assessment, which should consider such factors as missed deadlines and adverse publicity and should be used to determine whether an on-site review should be conducted and which areas of the HOPE VI grant should be reviewed. The published guidance included a risk assessment form and sample monitoring review reports. While HUD’s action was responsive to our recommendation, we have not examined the extent to which it has corrected the problems we identified in our 2003 report. Our 2003 report also noted that the status of work at HOPE VI sites varied and that the majority of grantees had missed one or more of three major deadlines specified in their grant agreements: the submission of a revitalization plan to HUD, the submission of a community and supportive services plan to HUD, and the completion of construction. We made recommendations to HUD designed to ensure better compliance with grant agreements. More specifically: Of the 165 sites that received revitalization grants through fiscal year 2001, 15 had completed construction at the time of our review. Overall, at least some units had been constructed at 99 of the 165 sites, and 47 percent of all HOPE VI funds had been expended. In general, we found that the more recently awarded grants were progressing more quickly than earlier grants. For example, fiscal year 1993 grantees had taken an average of 31 months to start construction. In contrast, fiscal year 2000 grantees started construction an average of 10 months after their grant agreements were executed. HUD cited several reasons that may explain this improvement, such as later grantees having more capacity than earlier grantees, the applications submitted in later years being more fully developed to satisfy notice of funding availability (NOFA) criteria, and HUD placing greater emphasis on reporting and accountability. 
To further improve its selection of HOPE VI grantees, we recommended that HUD continue to include past performance as an eligibility requirement in each year’s NOFA—that is, to take into account how housing authorities had performed under any previous HOPE VI grant agreements. In response to this recommendation, HUD stated in its fiscal year 2004 NOFA that a HOPE VI application would not be rated or ranked, and would be ineligible for funding, if the applicant had an existing HOPE VI revitalization grant and (1) development was delinquent due to actions or inactions that were not beyond the control of the grantee and (2) the grantee was not making substantial progress toward eliminating the delinquency. According to the fiscal year 2006 NOFA, the ratings of applicants that received HOPE VI grants between 1993 and 2003 can be lowered for failure to achieve adequate progress. For at least 70 percent of the grants awarded through fiscal year 1999, grantees had not submitted their revitalization plans or community and supportive services plans to HUD on time. Moreover, the large majority of grantees had also missed their construction deadlines; in the case of 9 grants, no units had been constructed as of the end of December 2002. HUD had taken some steps to encourage adherence to its deadlines; for example, HUD began requiring applicants to provide a certification stating that they had either procured a developer for the first phase of development or would act as their own developer. However, HUD did not have an official enforcement policy to deal with grantees that missed deadlines. As a result, we recommended that HUD develop a formal, written enforcement policy to hold public housing authorities accountable for the status of their grants. HUD agreed with this recommendation, and in December 2003 notified several grantees that they were nearing deadlines and that failure to meet these deadlines could result in HUD placing the grant in default. 
According to the 2006 NOFA, HUD may withdraw funds from grantees that have not proceeded within a reasonable time frame, as outlined in their program schedules. In our November 2003 report, we found that most residents at HOPE VI sites had been relocated to other public housing or other subsidized housing and that grantees expected about half of the original residents to return to the revitalized sites. In our examination of sites that had received HOPE VI grants in 1996, we found that the housing authorities had involved public housing residents in the planning and implementation process to varying degrees. Further, HUD data and information obtained during our site visits suggested that the supportive services provided to public housing residents yielded at least some positive outcomes. Finally, according to our analysis of census and other data, the neighborhoods in which 1996 HOPE VI sites are located had generally experienced positive improvements in educational attainment levels, average household income, and percentage of people in poverty, although we were unable to determine the extent to which the HOPE VI program contributed to these changes. According to HUD data, approximately 50 percent of the almost 49,000 residents who had been relocated as of June 30, 2003, had been relocated to other public housing; about 31 percent had used vouchers to rent housing in the private market; approximately 6 percent had been evicted; and about 14 percent had moved without giving notice or vacated for other reasons. However, because HUD did not require grantees to report the location of original residents until 2000, grantees had lost track of some original residents. Although grantees, overall, expected that 46 percent of all the residents who occupied the original sites would return to the revitalized sites, the percentage varied greatly from site to site. 
A variety of factors may have affected the expected return rates, such as the numbers and types of units to be built at the revitalized site and the criteria used to select the occupants of the new public housing units. We found that the extent to which the 1996 grantees involved residents in the HOPE VI process varied. Although all of the 1996 grantees held meetings to inform residents about revitalization plans and solicit their input, some of them took additional steps to involve residents in the HOPE VI process. For example, in Tucson, Arizona, the housing authority waited until the residents had voted their approval before submitting the revitalization plan for the Connie Chambers site to the city council. In other cases, litigation or the threat of litigation ensured resident involvement. For instance, under a settlement agreement, the Chicago Housing Authority’s decisions regarding the revitalization of Henry Horner Homes were subject to the approval of the Horner Resident Committee. Overall, based on the information available at the time of our 2003 report, grantees had provided a variety of community and supportive services, including case management and direct services such as computer and job training programs. Grantees had also used funds set aside for community and supportive services to construct facilities where services were provided by other entities. Information we collected during our visits to the 1996 sites, as well as limited HUD data on all 165 grants awarded through fiscal year 2001, indicated that HOPE VI community and supportive services had achieved or contributed to positive outcomes. For example, 31 of 49 participants in a Housing Authority of Pittsburgh health worker training program had obtained employment, while 114 former project residents in Louisville, Kentucky, had enrolled in homeowner counseling and 34 had purchased a home. 
According to our analysis of census and other data, the neighborhoods in which 1996 HOPE VI sites are located generally have experienced improvements in a number of indicators used to measure neighborhood change, such as educational attainment levels, average housing values, and percentage of people in poverty. For example, our analysis showed that in 18 of 20 HOPE VI neighborhoods, the percentage of the population with a high school diploma increased, in 13 neighborhoods average housing values increased, and in 14 neighborhoods the poverty rate decreased between 1990 and 2000. For a number of reasons—such as relying on 1990 and 2000 census data even though HOPE VI sites were at varying stages of completion—we could not determine the extent to which HOPE VI contributed to these changes. However, we found that several studies conducted by universities and private institutions also showed that the neighborhoods in which HOPE VI sites are located had experienced positive changes in income, employment, community investment, and crime indicators. For example, one study found that per capita income in eight selected HOPE VI neighborhoods increased an average of 71 percent, compared with 14.5 percent for the cities in which these sites are located, between 1989 and 1999. We also observed that the HOPE VI program may influence changes in neighborhood indicators through the demolition of older, distressed public housing alone. For example, in the six HOPE VI neighborhoods where the original public housing units had been demolished but no on-site units had been completed, measured educational attainment and income levels increased. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact David G. Wood at (202) 512-8678. Individuals making key contributions to this testimony included Alison Gerry, John McGrail, Lisa Moore, Paul Schmidt, and Mijo Vodopic. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since fiscal year 1992, the Department of Housing and Urban Development (HUD) has awarded more than $6 billion in HOPE VI program grants to public housing authorities to revitalize severely distressed public housing and provide supportive services to residents. HUD has encouraged housing authorities to use their HOPE VI grants to attract, or leverage, funding from other sources, including other federal, state, local, and private-sector sources. Projects funded with public and private funds are known as mixed-finance projects. This testimony is based primarily on three reports that GAO issued between November 2002 and November 2003, focusing on (1) the financing of HOPE VI projects, including the amounts of funds leveraged from non-HOPE VI sources; (2) HUD's oversight and administration of the program; and (3) the program's effects on public housing residents and neighborhoods surrounding HOPE VI sites. As requested, the statement summarizes the key findings from these reports, the recommendations GAO made to HUD for improving HOPE VI program management, and HUD's actions in response to the recommendations. In its November 2002 report, GAO found that housing authorities expected to leverage—for each HOPE VI dollar received—$1.85 in funds from other sources, and that the authorities projected generally increasing amounts of leveraged funds. GAO also found that even with the general increase in projected leveraging, 79 percent of the budgeted funds in mixed-finance projects that HUD had approved through fiscal year 2001 came from federal sources. GAO recommended that HUD provide the Congress with annual reports on the HOPE VI program, as required by statute, and provide data on the amounts and sources of funding used at HOPE VI sites. HUD has submitted these reports to Congress since fiscal year 2002. 
According to the 2006 report, HOPE VI grantees have cumulatively leveraged, from the program's inception through the second quarter of fiscal year 2006, $1.28 for every HOPE VI grant dollar expended. In its May 2003 report, GAO found that HUD's oversight of the HOPE VI program had been inconsistent for several reasons, including a shortage of grant managers and field office staff and confusion about the role of field offices. A lack of enforcement policies also hampered oversight; for example, HUD had no policy regarding when to declare a grantee in default of the grant agreement or apply sanctions. GAO made several recommendations designed to improve HUD's management of the program. HUD concurred with these recommendations and has taken actions in response, including publishing guidance outlining the oversight responsibility of field offices and notifying grantees that they would be in default of their grant agreements if they failed to meet key deadlines. In its November 2003 report, GAO found that most of the almost 49,000 residents who had been relocated as of June 2003 had moved to other public or subsidized housing; small percentages had been evicted, moved without giving notice, or vacated for other reasons. Grantees expected that about half of the original residents would return to the revitalized sites. Limited HUD data and information obtained during GAO's site visits suggested that the grantee-provided community and supportive services had yielded some positive outcomes, such as job training and homeownership. Finally, GAO's analysis of census and other data showed that neighborhoods surrounding 20 HOPE VI sites (awarded grants in 1996) experienced improvements in several indicators used by researchers to measure neighborhood change, such as educational attainment levels, average household income, and percentage of people in poverty. 
However, for a number of reasons, GAO could not determine the extent to which the HOPE VI program was responsible for the changes.
The Congress has long recognized the value of allowing laboratories to conduct a certain amount of discretionary research. The current LDRD program grew out of legislation enacted in 1977 that authorized the use of a reasonable amount of laboratory funds to conduct employee-suggested research and development (R&D) projects selected at the discretion of the laboratory directors. DOE’s implementation of its authority to conduct discretionary research evolved over the years. For example, in 1983, DOE Order 5000.1A formally established a discretionary R&D program called Exploratory Research and Development (ER&D); in 1992, DOE Order 5000.4A established the current LDRD program, which includes the previously defined ER&D program and other discretionary work, and further memorialized DOE’s long-standing policy of allowing its multi-program national laboratories the discretion to conduct self-initiated, independent R&D; and in 1997, DOE revised its LDRD program direction in DOE Order 413.2, providing clearer guidance on how LDRD funds may and may not be used. Each of DOE’s nine multi-program national laboratories has an LDRD program. Funding for LDRD projects comes from existing program budgets. Historically, this has been accomplished by allowing each laboratory to assess its program budgets at a set rate of up to 6 percent and accumulate that money in an overhead account for its LDRD program. DOE’s field offices oversee each laboratory’s LDRD program by approving the laboratory’s spending plans and making sure that projects comply with guidelines. DOE also approves each laboratory’s processes and procedures for selecting, reviewing, and tracking LDRD projects and requires annual reports from each laboratory. DOE conducts periodic reviews of the laboratories’ management that encompass the LDRD program. DOE’s nine multi-program laboratories have invested over $2 billion in LDRD projects since 1992, when the LDRD program was created. 
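The funding mechanism described above, in which each laboratory assesses its program budgets at a set rate and accumulates the proceeds in an overhead account, can be sketched as follows. The budget figures are hypothetical; only the 6 percent cap (and the 4 percent limit for fiscal year 2000, discussed later) comes from the text:

```python
# Sketch of the LDRD funding pool: each program budget is assessed
# at a set rate, capped at 6 percent, and the proceeds accumulate in
# an overhead account. Budget figures below are hypothetical.

LDRD_RATE_CAP = 0.06  # maximum assessment rate authorized by the Congress

def ldrd_pool(program_budgets, rate):
    """Total LDRD funds from assessing each program budget at `rate`."""
    if not 0 <= rate <= LDRD_RATE_CAP:
        raise ValueError(f"rate must be between 0 and {LDRD_RATE_CAP}")
    return sum(budget * rate for budget in program_budgets)

budgets = [250e6, 400e6, 150e6]  # hypothetical program budgets
print(ldrd_pool(budgets, 0.06))  # 48000000.0 at the 6 percent cap
print(ldrd_pool(budgets, 0.04))  # 32000000.0 at a 4 percent rate
```

The cap check reflects the statutory ceiling on the assessment rate; as the later discussion notes, the laboratories actually assessed their budgets at varying rates below that ceiling.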
DOE’s three large defense laboratories account for a majority of all LDRD spending. Most LDRD funding is invested in research supporting the laboratories’ strategic plans and maintaining the skills and competencies necessary to carry out laboratory missions. In addition, laboratory managers told us they believe that LDRD projects help to attract new scientists and encourage others to explore cutting-edge science projects in order to maintain the “vitality” of the laboratories. The managers believe that LDRD projects also help to identify new mission areas consistent with DOE’s overall mission. As shown in table 1, DOE’s nine multi-program national laboratories have spent over $2 billion on LDRD projects since fiscal year 1992. DOE’s three largest multi-program national laboratories—Lawrence Livermore in California, and Los Alamos and Sandia in New Mexico—account for nearly three-quarters of all LDRD spending. These laboratories concentrate on national security issues and in recent years spent nearly the maximum amount authorized by the Congress for LDRD projects—no more than 6 percent of their budgets, except for fiscal year 2000, when the Congress limited the amount to 4 percent. By contrast, DOE’s other laboratories generally spend less than 4 percent of their budgets on LDRD projects. (See table 2.) Each of the nine multi-program national laboratories established separate but similar LDRD categories of funding, using these as guides to selecting proposals. The number of categories ranged from one to five. In most laboratories, the largest category contained projects that aligned most closely with the laboratory’s strategic missions, such as the principal missions of national security at the defense laboratories and fundamental science at the Lawrence Berkeley National Laboratory. These types of LDRD projects tended to be larger and were expected to have nearer-term results. 
The second largest category was generally directed at building scientists’ skills and strengthening laboratory competencies. Generally, the laboratories target the smallest amount of funding to projects that are the highest risk and most cutting edge, as shown in the following examples: The Lawrence Livermore National Laboratory has three main categories of funding. Strategic Initiatives projects received 27 percent of all LDRD funds, focus on research addressing national needs in support of the laboratory’s strategic vision, and are larger multidisciplinary projects. Exploratory Research projects received 67 percent of the funds, support the strategic vision and competency building of programs and directorates across the laboratory, and are smaller than the Strategic Initiatives projects. Laboratory-wide projects received about 6 percent of the funds, are designed to encourage creativity of individual scientists in the pursuit of innovative research, and are funded at a maximum of $180,000. A category of funding that receives less than 1 percent of the laboratory’s LDRD funds—Definition and Feasibility Study projects—provides the seeds for new research ideas; these projects are usually funded for less than 6 months and $50,000. The Los Alamos National Laboratory has two categories of LDRD projects. Directed Research projects received about two-thirds of the funds, support the laboratory’s strategic plan, are typically multidisciplinary, and generally cost $1 million or more. Exploratory Research projects received about one-third of LDRD funds, are usually smaller and the most innovative, and generally cost $250,000 or less. The Pacific Northwest National Laboratory has three categories for LDRD projects. Laboratory-level projects received about two-thirds of the laboratory’s LDRD funds and are for projects that directly align with the laboratory’s primary research areas, are generally multiyear and multidisciplinary, and cost from $100,000 to $250,000. 
Division-level projects received about one-third of the LDRD funds, are aimed at developing new ideas in a particular mission area, have intermediate and near-term mission relevance, and cost from $80,000 to $100,000. Level VI projects, which received a total of about $500,000 of the laboratory’s LDRD budget, are intended to support highly innovative ideas and typically cost less than $60,000 each. DOE and laboratory officials believe that the innovative nature of LDRD projects helps attract new scientists who can contribute to maintaining the vitality of the laboratories. Those officials focusing on national security issues believe that the LDRD program helps attract scientists who can eventually perform national security research work. They believe that because nuclear weapons science is not taught in colleges and must be taught within the defense laboratories, LDRD projects—and the scientists they attract—are vital for national security in the long term. For example, postdoctoral students represent a major source of future research staff at the laboratories, and most of them are hired to work on LDRD projects. Sixty-two percent of Sandia’s postdoctoral staff hired between 1996 and 1999 worked on LDRD projects. DOE’s Laboratory Operations Board, comprising internal managers and external consultants, reported in January 2000 that LDRD programs are vital in recruiting and retaining the best scientific talent into the laboratories. According to the Board’s report, from 1993 through 1998, 41 percent of LDRD-funded postdoctoral staff at Lawrence Livermore National Laboratory—a defense program laboratory—were subsequently hired by the laboratory. Officials from nondefense program laboratories also told us that LDRD projects are important for attracting and maintaining scientific talent in their laboratories. 
These laboratories, however, spend less on LDRD than defense program laboratories for a number of reasons, including that they conduct more basic science work as a primary mission within their regular programs. All of the randomly selected LDRD projects we reviewed at the five laboratories we visited met DOE’s guidelines for selection. Additionally, DOE’s and the laboratories’ management controls were adequate to reasonably ensure that approved projects would likely meet DOE’s project-selection guidelines. DOE’s guidelines specify that LDRD projects must be in the forefront of science and technology and should include at least one of the following: advanced study of hypotheses, concepts, or innovative approaches to scientific or technical problems; experimentation and analyses directed toward “proof of principle” or early determination of the utility of new scientific ideas, technical concepts, or devices; or conception and preliminary technical analyses of experimental facilities or devices. In addition, DOE’s guidelines generally provide that LDRD projects should not last longer than 36 months, should not be supplemented by non-LDRD funds, should not be used to perform or supplement funding for DOE’s program work, and should not be used to fund construction for scientific projects beyond the preliminary phase of the research. All LDRD projects we reviewed met DOE’s guidelines. These projects were new projects that were proposed for fiscal year 2000 funding. Most of these projects tested or analyzed a new or untested concept and were consistent with the laboratory’s strategic missions, as shown in the following examples: A Los Alamos project has a goal of advancing the state of fundamental simulation theory so that sophisticated simulation tools can be developed for use in decision-making in complex national security environments, such as critical national infrastructure analysis and military engagements. 
The project involves developing complex integrated simulation tools that will advance fundamental research in the areas of mathematical foundations of simulation, issues in implementing and computing for large simulations, statistical methods for simulation-based studies, and principles for simulation-based assisted reasoning. The project’s results are expected to be relevant primarily to mobile communications, regional population mobility and transportation infrastructure, electrical power distribution networks and markets, epidemiological impacts on populations, and threat identification and targeting in urban terrain. In the project’s first year, among other things, demonstrations will focus on mobile telecommunications, transportation systems, and epidemiological impacts. The project is being done under Los Alamos’ Directed Research category and supports the laboratory’s strategic goals in threat reduction, high-performance computing, and modeling and simulation. The project was proposed for 3 years; $600,000 was approved for first-year funding. An Argonne National Laboratory project is designed to fabricate magnetic wires from 20 nanometers (a nanometer is one-billionth of a meter) down to the atomic scale and study their static and dynamic magnetic properties. This project complements Argonne’s mission in the materials science area and could help define a new research direction for the laboratory. The ultimate goal is to create a new generation of miniaturization in electronics, including memories, transistors, logic elements, and sensors. The physical size of a magnetic system may affect its magnetic properties; this project proposes to study this phenomenon and make major inroads in understanding the fundamental issues of low-dimensional magnetic systems. 
These issues require a basic understanding of magnetic thin films and multilayers used in computing today as well as a deeper understanding of one-dimensional nanotechnology and the synthesis of materials in this environment. The project managers plan to develop samples unprecedented in the study of lower-dimensional systems to better explore fundamental questions in next-generation magnetism research. This project is being done under Argonne’s category of funding for more innovative projects—the Director’s Competitive Grants Program. The project was proposed for 2 years; $65,000 was approved for first-year funding. A Sandia National Laboratories project aims to develop new scientific tools for addressing the threat of biological terrorism, which is consistent with Sandia’s national security mission. Currently, the ability to initially detect people exposed to a released agent relies on the outward appearance of symptoms, such as lethargy and fever. The goal of this proposed LDRD project is to show that earlier detection, based on cellular-level changes in the body identified through blood analysis, could be accomplished. The project also aims to develop techniques and models to detect and analyze infection without waiting for external symptoms. The results could reduce disease detection time from days to hours. The development of a rapid, highly sensitive screening mechanism would also have widespread application in the fight against other infectious diseases. This 9-month project costs $100,000 and falls under Sandia’s Development Reserve category, which is used for urgent science and technology needs or technical work related to the development of a new program. DOE and laboratory management controls were adequate to reasonably ensure that approved projects would likely meet DOE’s project-selection guidelines. 
The key controls in place included using DOE’s guidelines to control and conduct the project-selection process, involving individuals with the appropriate skills and knowledge to evaluate the proposed projects in the review and selection process, substantially segregating duties among individuals to help ensure that no one individual could control the project-selection decision in a way that would violate LDRD’s guidelines, and ensuring appropriate DOE oversight and review of the results of the process. All laboratories used DOE’s LDRD Order 413.2 as the primary guidance for reviewing and selecting projects. Individuals involved in the review and selection of the projects had the requisite background and experience to provide credible reviews. Those individuals had wide-ranging scientific backgrounds—usually a doctorate and practical experience in basic scientific research. When the subject matter of a project proposal was outside the knowledge base of the review team, the laboratories generally contracted with outside experts to provide reviews and recommendations on the merits of that proposal. In general, each laboratory established review panels comprising individuals from across the laboratory, which provided for diverse opinions and ensured that various points of view were brought to bear on the selection decision. In general, the review panels consisted of managers from directorates having knowledge in the project subject area, other subject matter experts, and managers from the LDRD program. Finally, DOE’s field offices, which are responsible for overseeing each laboratory, annually review the laboratories’ recommendations for projects to be funded and forward recommendations to headquarters for approval. While DOE’s reviews of proposed projects have resulted in clarifications and minor revisions in the proposals’ documentation, those reviews have rarely resulted in not funding proposed projects. 
All laboratories we reviewed have separate and somewhat different review and selection processes linked to their distinct categories of funding for LDRD projects, but key elements of these processes are very similar. For example, the laboratories we visited initiate their annual LDRD selection process by issuing “calls for proposals,” which ask research staff to propose potential projects that generally fit into a particular category of funding in the LDRD program. Reviewers for the individual categories of funding review those proposals and either reach consensus or vote outright on where each proposal should be ranked in terms of recommending it for funding. That recommendation is then generally given to the laboratory director, who selects the projects to be funded. The projects recommended for funding are given to DOE’s field offices for review and comment and ultimately forwarded to DOE’s headquarters for approval. The LDRD program could improve its performance reporting. Each laboratory issues an annual LDRD report that includes performance reporting, but those reports do not use a common set of performance indicators. Additionally, the reports present performance information in varying formats, making it difficult to focus on the most relevant performance information. Laboratory managers told us there is no consensus on which performance indicators to use when reporting the results of their LDRD projects, nor is there an agreed-upon reporting format. While the reports describe the accomplishments of individual laboratories, taken together, the laboratories’ reports do not provide aggregate performance information that DOE managers and the Congress could use to readily assess the overall value of the program. The different performance indicators reported in each of the laboratories’ annual LDRD reports make it difficult to readily assess overall program performance for DOE’s LDRD program. 
Table 3 provides a summary of the performance information included in the annual LDRD reports published by the nine multi-program national laboratories in our review and demonstrates the lack of uniformity in reporting the LDRD program’s results across the laboratory complex. In general, the laboratories maintain more detailed performance information than they report in their annual reports, but laboratory officials do not agree on a set of performance indicators that should be reported on for the program. Some pointed out that there are significant differences among types of publications. Refereed publications, for example, must go through an expert review process before they can be published. Also, certain publications have higher levels of difficulty and achievement and, therefore, significance. The same issue surrounds the tabulation of awards as performance indicators. Likewise, symposia, as well as other potential measures, carry different degrees of significance. Many suggested that success stories are the best measures of a project’s performance, particularly for basic research whose ultimate value may not be evident for a long time. Furthermore, they told us that projects viewed as unsuccessful with respect to their direct proposed goals might in fact have answered critical questions that paved the way for major breakthroughs in science. In addition, we found that differences in how performance information is presented in the laboratories’ annual LDRD reports also make it more difficult to assess the overall value of the program. As indicated in table 3, we found that while some laboratories present performance information for individual projects, other laboratories present performance information in a summary fashion. Two contrasting performance-reporting styles can be found in Sandia National Laboratories’ and Lawrence Livermore National Laboratory’s annual LDRD reports. 
Sandia’s report provides an appendix entitled “Project Performance Measures,” which lists LDRD projects and catalogues outputs of the projects using 11 quantitative performance indicators and several qualitative indicators. In contrast, Lawrence Livermore’s LDRD report provides an appendix listing publications resulting from individual LDRD projects and describes—in summary format rather than on a project-by-project basis—several other quantitative performance indicators, including patents, awards, and permanent staff hired. While the laboratories’ annual LDRD reports describe the accomplishments of individual laboratories, taken together, the laboratories’ reports do not provide aggregate performance information that DOE managers and the Congress could use to readily assess the overall value of the program. Aggregate, more-uniform performance reporting on the LDRD program could aid DOE managers, the Congress, and others in their oversight of the program. In general, LDRD project-selection and review processes in place at DOE’s multi-program national laboratories are adequate to reasonably ensure compliance with DOE’s project-selection guidelines. Our review of randomly selected LDRD projects at laboratories found that they met DOE’s guidelines. However, our observations of the performance-reporting practices for the LDRD program lead us to conclude that performance reporting for the program could improve. By reporting aggregate, more-uniform performance information for the LDRD program as a whole, DOE managers and the Congress could more readily assess the overall value of the program. To improve the Congress’s ability to make informed decisions on the value of the LDRD program, we recommend that the Secretary of Energy develop and annually report aggregate, more-uniform performance information for the LDRD program. 
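Aggregate, more-uniform reporting of the kind recommended here could be built by summing a common set of indicators across laboratories. The sketch below is illustrative only: the laboratory names are real, but the indicator counts are invented, and the shared key set is an assumption rather than an established DOE reporting standard.

```python
from collections import Counter

# Hypothetical per-laboratory indicators reported against one common set of
# keys; the counts are invented for illustration, not taken from the reports.
lab_reports = {
    "Sandia": {"refereed_publications": 240, "patents": 18, "awards": 7},
    "Lawrence Livermore": {"refereed_publications": 310, "patents": 11, "awards": 9},
    "Los Alamos": {"refereed_publications": 275, "patents": 14, "awards": 5},
}

# Program-wide totals that could be reported annually for the LDRD program.
program_totals = Counter()
for indicators in lab_reports.values():
    program_totals.update(indicators)
```

Reporting against one shared key set is what makes the totals meaningful; indicators chosen independently by each laboratory, as in the current annual reports, cannot be summed into a program-wide figure.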
This recommendation will require DOE’s National Nuclear Security Administration and the Office of Science, which are both accountable for laboratory performance, to work together and develop performance indicators that can be used to demonstrate accomplishments across all the laboratories. We provided a draft of this report to DOE for review and comment. According to representatives of the Office of Science responsible for the LDRD program, DOE agreed with our findings, conclusions, and recommendation. DOE also provided a number of clarifying comments, which we incorporated, as appropriate, in this report. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to the Secretary of Energy and the Director, Office of Management and Budget. We will also make copies available to others on request. To determine how much the Department of Energy’s (DOE) multi-program national laboratories have spent on Laboratory Directed Research and Development (LDRD) projects since 1992 (when the LDRD program was created), we reviewed program information, including annual reports, budgets and other financial information provided by DOE and laboratory officials for the nine DOE multi-program national laboratories. These laboratories are Argonne National Laboratory, Brookhaven National Laboratory, Idaho National Engineering and Environmental Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory, and Sandia National Laboratories. Although DOE’s Ames Laboratory has a LDRD program, we excluded it from our review because Ames is not a multi-program national laboratory. 
To determine if LDRD projects met DOE’s selection guidelines, we reviewed the procedures and processes for selecting LDRD projects at all nine of DOE’s multi-program national laboratories. We also tested the internal controls for project selection at five of those laboratories and the respective DOE offices responsible for oversight of the program, and randomly selected approved LDRD projects at those five laboratories. The five laboratories were Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories. These laboratories include three of the largest multi-program national laboratories and represent 83 percent of DOE’s LDRD expenditures for the period reviewed. We also interviewed DOE’s field office officials responsible for the oversight of the program in the Albuquerque, Chicago, Idaho, and Oakland Operations offices. In addition, we interviewed officials responsible for the LDRD program in DOE’s headquarters Office of Science, Office of Defense Programs, and Office of Environmental Management. To test the internal controls of the program, we evaluated the processes and procedures used to select LDRD projects. The internal control tests were designed to determine if adequate management control was built into the LDRD program to provide reasonable assurance that projects approved through the program comply with DOE’s guidelines for the LDRD program. 
We performed the internal control tests by examining the processes and procedures to ensure that the (1) people involved in the selection of the LDRD projects used the same guidance and selection criteria, (2) individuals involved in the selection of the projects had the appropriate skills and knowledge to evaluate the proposed projects, (3) duties in the project-selection process were segregated substantially among individuals so that no one individual would be likely to control the project-selection decision in a way that would violate LDRD guidelines, and (4) DOE oversight activities were adequate. To accomplish this, we obtained from the respective DOE officials and laboratory management officials documentation and interview information on guidance provided to the LDRD project review and selection personnel on how to select LDRD projects for funding at each laboratory. We then obtained documentation on the LDRD processes and procedures for reviewing and selecting projects for funding at each of the five laboratories. This information included documentation on the process from how proposed projects originate to final selections or other dispositions. We obtained documentation and interview information on which individuals participate in each phase of the process, their roles, and their backgrounds. Using random number tables, we selected five projects from each of the five selected multi-program national laboratories’ projects approved for funding for fiscal year 2000—a total of 25 projects. Because each laboratory had more than one category of LDRD funding and to enable us to review projects within each of those categories, we randomly selected at least one project from each category of funding at each laboratory. We determined if each selected project met DOE’s project-selection guidelines and, to complement our internal control tests, we examined the elements of the processes, qualifications of reviewers, segregation of duties, and DOE’s oversight. 
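The sampling scheme described above (five projects per laboratory, with at least one drawn from each category of LDRD funding) amounts to stratified random sampling. A minimal sketch, using an invented laboratory with hypothetical category and project names:

```python
import random

def select_projects(projects_by_category, total=5, seed=None):
    """Randomly pick `total` projects from one laboratory, guaranteeing at
    least one project from each LDRD funding category."""
    rng = random.Random(seed)
    # One draw from every funding category first.
    picks = [rng.choice(projects) for projects in projects_by_category.values()]
    # Fill the remaining slots from the pool of not-yet-selected projects.
    pool = [p for projects in projects_by_category.values()
            for p in projects if p not in picks]
    picks += rng.sample(pool, total - len(picks))
    return picks

# Invented laboratory with three hypothetical LDRD funding categories.
lab = {
    "director-initiated": ["D1", "D2", "D3"],
    "division-level": ["V1", "V2", "V3", "V4"],
    "exploratory": ["E1", "E2"],
}
selected = select_projects(lab, total=5, seed=42)
```

GAO used random number tables rather than software; the seeded generator here plays the same role of making the draw random but reproducible.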
For each of these projects, we reviewed the project proposal files, including documentation reflecting individual reviewers’ recommendations on the disposition of each case. We also interviewed the scientists who proposed each project and the laboratory officials responsible for reviewing the projects for selection to better understand the technical nature of the research and how that research meets DOE’s guidelines for LDRD projects. Interviews with selection officials also focused on determining if individuals involved in the selection of the projects had the appropriate skills and knowledge to evaluate the proposed projects and if the duties in the process were segregated so that no one individual would be likely to control the project-selection decision in a way that would violate LDRD’s criteria. We also interviewed DOE officials in headquarters and the field offices involved in the oversight process through which the projects were selected. While we cannot project the results of our analysis of LDRD projects to the universe of those projects, our analysis provides a snapshot of how internal controls were being applied at the five selected laboratories and additional confidence in the overall results of our internal control testing. To provide views on how the program might be improved, we relied on observations obtained throughout the course of our audit work. We provided a draft of this report to DOE for review and comment. According to representatives of the Office of Science responsible for the LDRD program, DOE agreed with our findings, conclusions, and recommendation. DOE also provided a number of clarifying comments, which we incorporated, as appropriate, in this report. Our review was performed from December 1999 through September 2001 in accordance with generally accepted government auditing standards.
The Department of Energy (DOE) created the Laboratory Directed Research and Development (LDRD) program in fiscal year 1992. This program formalized a long-standing policy of giving its multi-program national laboratories discretion to conduct self-initiated, independent research and development (R&D). Since then, DOE's multi-program national laboratories have spent more than $2 billion on LDRD projects. DOE's three largest multi-program national laboratories account for nearly three-quarters of laboratory-wide LDRD spending. All LDRD projects GAO reviewed at the five laboratories met DOE's guidelines for selection. In addition, each of the five laboratories created the internal controls necessary to reasonably ensure compliance with DOE's guidelines. Each laboratory issues an annual LDRD report that contains performance indicators, such as the numbers of patents obtained, publications, copyrights, and awards, as well as the relevance of the research to DOE's missions. The reports present performance information in various formats, making it difficult to focus on the most relevant performance information.
This background section describes (1) objectives, milestones, and management considerations in the B61-12 LEP and (2) DOE directives and NNSA policy letters and how they apply to programs such as the B61-12 LEP. The B61-12 consists of two major assemblies: the bomb assembly and the tail kit guidance assembly. NNSA manages the development and production of the bomb assembly and the Air Force manages the development and production of the tail kit assembly, among other activities, as follows: NNSA responsibilities and the bomb assembly. According to NNSA officials and documents, the bomb assembly will include reused, refurbished, and new nuclear and nonnuclear components. The design approach for the LEP maximizes reuse of existing nuclear and nonnuclear components and is intended to improve the safety and security of the weapon using proven technologies. NNSA manages the development and production of the bomb assembly under the direction of a federal program office and federal program manager located at Kirtland Air Force Base in Albuquerque, New Mexico, which is also the site of NNSA’s Sandia National Laboratories. NNSA sites and laboratories involved in the LEP include Sandia National Laboratories, the design agency for nonnuclear components, a production agency for some components, and system-level integrator of the overall weapon design; Los Alamos National Laboratory, the design and production agency for the nuclear explosive package; the Kansas City National Security Campus, the Y-12 National Security Complex, and the Savannah River Site, the production agencies for various new or refurbished weapon components; and the Pantex Plant, where some bomb components are produced and final assembly of the bombs takes place. In addition, Lawrence Livermore National Laboratory provides independent review of Los Alamos National Laboratory’s work on nuclear components. 
As of September 2015, NNSA’s expected costs for its share of the LEP work were approximately $7.3 billion. Air Force responsibilities and the tail kit assembly. According to Air Force officials and documents, the tail kit assembly will provide the B61-12 with a guided freefall capability that improves the accuracy of weapon delivery. The guided capability will enable the weapon to meet military requirements with a lower nuclear yield, allowing for the use of less special nuclear material. The B61-12 is designed to be compatible with existing dual-capable aircraft—the F-15, F-16, and PA-200—as well as the B-2 strategic bomber and planned future aircraft such as the F-35 fighter. The Air Force’s responsibilities include integrating the B61-12 with its delivery aircraft and the operational flight program software. This software is being upgraded in the F-15 and B-2 delivery aircraft so that these aircraft can work with the B61-12’s digital interface. The Air Force Nuclear Weapons Center at Kirtland Air Force Base manages technical integration, system qualification, and other LEP-related tasks required to certify and field the weapon, as well as tail kit acquisition, which is contracted to Boeing. As of September 2015, the Air Force’s expected costs for its share of the LEP work were approximately $1.6 billion. Figure 1 shows the B61-12. The joint 6.X guidance describes key high-level joint tasks and deliverables for each phase of nuclear refurbishment activities such as an LEP. Specifically, the 6.X guidance lists key milestones, such as tests and cost estimates, that a nuclear weapon refurbishment activity must undertake before proceeding to subsequent steps of the Phase 6.X process (see fig. 2). NNSA and DOD implement the Phase 6.X process under a guidance document, Procedural Guideline for the Phase 6.X Process, which was issued in 2000 and is undergoing its first revision. 
This document describes the roles and functions of DOD, DOE, and NNSA in nuclear weapon refurbishment activities conducted through the Phase 6.X process. It also describes the roles and functions of two joint bodies that provide oversight and approval functions to LEPs and other nuclear weapons–related activities: the Nuclear Weapons Council and its Standing and Safety Committee. In addition, the Nuclear Weapons Council charters a Project Officers Group for each weapon system to provide a technical forum for weapon development and management activities. Each Project Officers Group is led by a project officer from either the Navy or Air Force, the two military services that maintain and operate nuclear weapons. Importantly, for more detailed requirements and guidance on program management matters, DOE and DOD each utilize their own agency-specific directives. In the B61-12 LEP’s current phase—6.3, development engineering—NNSA coordinates with the Air Force to conduct experiments, tests, and analyses to develop and validate the selected design option. Key steps that have not yet taken place in Phase 6.3 of the B61-12 LEP include formally developing a program cost baseline—a more mature cost estimate than is currently in use—and finalizing the design definition. Program officials told us they expect to issue the baseline cost report, which will formalize the program’s cost baseline, and to approve the baseline design in the third quarter of fiscal year 2016. According to program officials, the LEP is on schedule to enter Phase 6.4 (production engineering) in the fourth quarter of fiscal year 2016. The B61-12 LEP is one of several LEPs or refurbishments that NNSA and DOD have plans to undertake or have already started. Other LEPs or refurbishments include the ongoing LEP for the W76 warhead, an alteration to the W88 warhead, and planned LEPs for the cruise missile and interoperable warheads. 
Some of these activities are or will be taking place concurrently, and several have had their completion dates revised over the years, as shown in table 1. In addition, NNSA plans to move important production operations into new or modified facilities during this time period. Because of overlapping LEPs and new infrastructure, NNSA and DOD officials told us that they recognize the need to continue to improve coordination and management of the nuclear security enterprise. Earned value management is a project management tool developed by DOD in the 1960s to help managers monitor project risks. Earned value management systems measure the value of work accomplished in a given period and compare the measured value with the planned value of work scheduled for that period and the actual cost of work accomplished. Earned value management’s intended purpose is to integrate a project’s cost, schedule, and technical efforts for management and provide reliable data to decision makers. DOE’s Departmental Directives Program, defined and established through a DOE order, classifies directives into several types. These directive types include orders and guides, which the Departmental Directives Program describes as follows: Orders. Orders establish requirements and should include detailed instructions describing how requirements are to be implemented. Guides. Guides provide information on how to implement the requirements contained in orders. They are a nonmandatory means for complying with these requirements and cannot be made mandatory by reference in other DOE directives. The National Nuclear Security Administration Act, through which Congress established the NNSA, also gives the NNSA Administrator the authority to establish NNSA-specific policies, unless disapproved by the Secretary of Energy. NNSA does so through the issuance of policy letters. These policy letters take the form of NNSA policies, supplemental directives, and business operating procedures. 
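The earned value comparisons described above reduce to two standard variances and two indices: earned value against planned value for schedule, and earned value against actual cost for budget. A minimal sketch with hypothetical monthly figures (the dollar values are invented for illustration):

```python
def earned_value_metrics(planned_value, earned_value, actual_cost):
    """Standard earned value management (EVM) measures.

    planned_value: budgeted cost of the work scheduled for the period
    earned_value:  budgeted cost of the work actually performed
    actual_cost:   actual cost of the work performed
    """
    return {
        "schedule_variance": earned_value - planned_value,  # negative: behind schedule
        "cost_variance": earned_value - actual_cost,        # negative: over budget
        "schedule_performance_index": earned_value / planned_value,
        "cost_performance_index": earned_value / actual_cost,
    }

# Hypothetical one-month report from a single site, in millions of dollars.
metrics = earned_value_metrics(planned_value=120.0, earned_value=110.0,
                               actual_cost=125.0)
```

In this invented example the site is both behind schedule (negative schedule variance) and over budget (negative cost variance), the kind of early signal EVM is intended to surface for decision makers.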
DOE manages the B61-12 LEP as a program. The department makes distinctions between programs and projects and uses different directives to prescribe the management approach for each, as follows: Programs. According to NNSA officials, no DOE order exists that provides management requirements for program activities, such as LEPs. As we found previously in our November 2014 report on DOE and NNSA cost estimating practices, for example, DOE and NNSA programs were not required to meet any cost estimating best practices. NNSA officials stated at that time that NNSA cost estimating practices for programs were limited, decentralized, and inconsistent, and were not governed by a cost estimating policy or single set of NNSA requirements and guidance. According to these officials, each NNSA program office used different practices and procedures for the development of cost estimates that were included in the NNSA annual budget. Projects. NNSA’s management of projects is governed by DOE Order 413.3B (DOE’s project management order). The order applies to capital asset projects above a certain cost threshold. It provides management direction for NNSA and other DOE offices, with the goal of delivering projects within the original performance baseline that are fully capable of meeting mission performance and other requirements, such as environmental, safety, and health standards. The order specifies requirements that must be met, along with the documentation necessary, to move a project past major milestones. It provides requirements regarding cost estimating (and, in some cases, the preparation of an independent cost estimate), technology readiness assessments, independent project reviews, and the use of earned value management systems, among other requirements. As we have previously found, DOE’s project management order applies to programs only in conjunction with a program’s acquisition of capital assets. 
The B61-12 LEP’s program managers have developed a management approach that was then used to inform a new NNSA policy that applies to NNSA defense program management. In addition, NNSA and DOD have identified some potential management challenges in the program; the new management approach and policy may help NNSA address these challenges, but it is too soon to evaluate the likelihood that they will adversely affect the program. The B61-12 LEP’s program managers have developed, documented, and are using program management practices and tools for the LEP to help identify and avoid cost and schedule overruns and technical issues. As noted above, we have found in past reports that NNSA and DOD experienced program management challenges in LEPs, including the B61-12 LEP and the ongoing LEP for the W76 warhead. We made recommendations related to NNSA’s budget assumptions, cost tracking methods, and risk management plans and continue to monitor NNSA’s response. Since we issued those reports, the B61-12 LEP’s program managers developed a management approach for the program that draws on DOE directives and other sources, including our Cost Guide. For example, we found in our November 2014 report on NNSA project and program management that the B61-12 LEP’s managers used our Cost Guide, as well as direction under the Phase 6.X process and DOE’s project management order and cost guide, to develop their approach for developing cost estimates. Several officials from both NNSA and DOD characterized the B61-12 LEP’s overall program management approach as improved over the approaches used in previous LEPs. This approach includes the following practices and tools: Improved management capability and authority. The B61-12 LEP’s program office has taken steps to improve management capability and authority. According to NNSA officials, the LEP successfully requested that the department enlarge its federal program office staff to provide more management capability. 
Specifically, the program office had 3 full-time equivalent (FTE) staff at the beginning of the program; as of October 2015, it has 8 FTEs, augmented by contractor staff of about 12 FTEs, according to program officials. Moreover, since 2014, the federal program manager said that he has successfully requested contingency and management reserve funds of $983 million over the life of the program—about 13.5 percent of NNSA’s estimated $7.3 billion total project cost—which he has authority to use to help manage the effects of realized risks or changes in funding, such as a continuing resolution. An earned value management system. According to NNSA and DOD officials we interviewed, the B61-12 LEP is the first LEP to use earned value management, a tool that may help NNSA ensure that its work progresses on budget and on schedule. Each participating NNSA site is responsible for reporting earned value data monthly against the scope, schedule, and budget baselines established for each site’s activities. According to NNSA officials involved with the LEP, earned value management identifies schedule variances as they happen so that the program is aware of any work that may be progressing more slowly than expected and could go on to affect key milestones. Integrated master schedules. According to NNSA officials we interviewed, the B61-12 LEP is also the first NNSA defense program to summarize details from site schedules into a summary NNSA Integrated Master Schedule (NIMS) for work at the participating NNSA sites. The B61-12 LEP also has developed a top-level schedule that all program participants use, the Joint Integrated Master Schedule (JIMS). In past LEPs, according to the officials we interviewed, NNSA did not fully reconcile and integrate its individual sites’ schedules, which may have contributed to program delays and cost increases. 
The JIMS and other integrated master schedules are key tools in the B61-12 LEP’s schedule and risk management strategy, according to officials we interviewed. Integrated cost estimates. An official from NNSA’s Office of Cost Policy and Analysis told us that the B61-12 LEP is the first NNSA defense program to issue a cost estimate that integrates all participating sites’ costs into a single program cost estimate. In past LEPs, according to the official, NNSA did not integrate its individual sites’ cost estimates, which contributed to baseline costs being underestimated. Independent cost estimate. The official from NNSA’s Office of Cost Policy and Analysis told us that the office annually prepares and publishes an independent cost estimate for the NNSA portion of the B61-12 LEP to help inform the cost estimate prepared by the B61-12 LEP program manager. This estimate is prepared without reference to the program manager’s estimate and uses a different method. The estimate can then be compared to the program manager’s estimate to further refine it as the program develops the formal baseline cost estimate known as the baseline cost report, one of the key steps preceding the transition to Phase 6.4. As we found in our November 2014 report on NNSA cost estimating practices, having an independent entity conduct an independent cost estimate and compare it to a project team’s estimate provides an unbiased test of whether the project team’s cost estimate is reasonable. Technology and manufacturing readiness assessments. According to B61-12 program officials, the B61-12 program management team uses NNSA business practices to assess technology and manufacturing readiness levels for the LEP. The officials told us that all weapon components are maturing as planned with respect to their technology or manufacturing readiness. 
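The cross-check between the program manager's cost estimate and the independent estimate described above can be sketched as a simple percent-difference comparison. The figures and the 10 percent reconciliation threshold below are illustrative assumptions, not NNSA values or rules:

```python
def estimate_gap(program_estimate, independent_estimate):
    """Relative difference of an independent cost estimate from the
    program manager's estimate, as a fraction of the latter."""
    return (independent_estimate - program_estimate) / program_estimate

# Hypothetical estimates in billions of dollars; the 10 percent threshold
# for flagging the two estimates for reconciliation is invented.
gap = estimate_gap(program_estimate=7.3, independent_estimate=7.9)
needs_reconciliation = abs(gap) > 0.10
```

Because the independent estimate is built with a different method and without reference to the program estimate, a small gap provides the unbiased reasonableness test described in our November 2014 report.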
Officials also told us that they were conservatively applying NNSA business practices and noted that some components that have been in use for years may be assessed at lower readiness levels to account for other design changes in the B61-12. In addition, NNSA is in the process of planning for a technology readiness review later in 2015 by a group of Sandia National Laboratories experts that are not otherwise part of the B61-12 LEP. We have found in previous work that such independent peer reviews can identify important technology issues. Peer review of the nuclear explosive package. Nuclear weapons designers at Lawrence Livermore National Laboratory, which is not otherwise involved in the LEP, provide peer review of the nuclear explosive package components being designed at Los Alamos National Laboratory—a practice in keeping with past LEPs. Since the United States ceased nuclear explosive testing in 1992, DOE and NNSA have relied on, among other things, national laboratory peer reviews to help ensure the continued safety, reliability, and effectiveness of U.S. nuclear weapons without explosive testing. According to the official in charge of the peer review, no significant issues with the nuclear explosive package have emerged in the peer review. About 4 years after the B61-12 LEP began, NNSA incorporated elements of the B61-12 LEP management approach into a new policy for defense program management. Specifically, in August 2014, NNSA issued its Defense Programs Program Execution Guide (Program Execution Guide) regarding program management practices in NNSA defense programs, including LEPs. It applies to all ongoing and planned LEPs, including the B61-12 LEP. NNSA officials told us that the B61-12 LEP’s program management approach served as a model for many of the management practices and tools established in the Program Execution Guide. These practices include the use of earned value management systems, integrated master schedules, and risk management systems. 
NNSA officials told us that the name of the Program Execution Guide has created some confusion, and that it is not a DOE Guide—which provides nonmandatory means for meeting DOE requirements—but rather an NNSA policy letter. To allay the confusion, the officials told us, NNSA is in the process of renaming the document the Defense Programs Program Execution Instruction. As noted above, DOE does not have an order that provides requirements for the management of programs more generally; the Program Execution Guide applies only to those programs and projects managed by NNSA’s Office of Defense Programs. In undertaking its new management approach, NNSA has taken steps to address some of our prior recommendations, including drawing on our Cost Guide, but the B61-12 LEP still faces potential challenges regarding program management. The LEP’s new management approach, along with practices outlined in the Program Execution Guide, may help NNSA address these challenges, but it is too soon to evaluate how the challenges may affect the LEP. Potential challenges NNSA and DOD have identified include the following: Limited management capability and authority. According to an NNSA official, even with the increase in federal staff, NNSA needs two to three times more personnel in the federal program manager’s office to ensure sufficient federal management and oversight. As noted above, the NNSA federal program office employs about 20 people—8 federal FTEs and about 12 FTE-equivalent contractors—to manage NNSA activities. In contrast, the Air Force office employs about 80 federal FTEs and contractors to manage Air Force activities. In addition, the November 2014 report of the Congressional Advisory Panel on the Governance of the Nuclear Security Enterprise raised issues about the sufficiency of NNSA program managers’ authority. Specifically, the report states, “Although NNSA designates government program managers for each major program, their authorities have been very limited. 
Most importantly, they have lacked control over resources necessary to exercise needed leadership. In practice, they could more accurately be described as program coordinators than as program managers.” Similarly, in our March 2009 report, we found that NNSA’s program manager for the W76 LEP did not have sufficient authority over the construction or operation of a facility that was critical to the LEP, which played a role in resulting cost and schedule overruns. Untested earned value management. NNSA and DOD officials we interviewed noted that NNSA’s earned value management system will be useful only insofar as good data are entered into the system and the system is used to inform program management. We have similarly noted in our Cost Guide that using earned value management represents a culture change and requires sustained management interest, as well as properly qualified and trained staff to validate and interpret earned value data. According to the officials we interviewed, the system is too new for them to determine conclusively whether the data are accurate and the earned value management system is being used effectively. The officials said that work to validate the data in the system is ongoing, with formal reviews to assess the quality of the system planned for 2016. Cost estimating requirements and practices that have not followed best practices. In our November 2014 report, we found that NNSA defense programs generally, and the B61-12 LEP specifically, were not required to follow cost estimating best practices. For example, in that report, we found that the B61-12 LEP’s team-produced guidance for the program cost estimate did not stipulate that NNSA program managers or its contractors must follow any DOE or NNSA requirements or guidance to develop the B61-12 cost estimate. 
We recommended in the report that DOE revise its directives that apply to programs to require that DOE and NNSA and its contractors develop cost estimates in accordance with best practices. DOE agreed with this recommendation and, in June 2015, the Secretary of Energy issued a memorandum directing the heads of all department elements to use established methods and best practices, including practices identified in our Cost Guide, to develop, maintain, and document cost estimates for capital asset projects. We note that the memorandum pertains to departmental policy related to project management, not to program management. In the area of programs, NNSA officials described actions that the agency had begun taking to address our recommendation regarding cost estimating practices. We continue to monitor DOE’s response to our recommendation. Guidance on technology readiness that has not followed best practices. An NNSA business practice requires technology readiness reviews for LEPs, but it does not specify technology readiness requirements for entering into first production—Phase 6.5 of the 6.X process. Best practices followed by other federal agencies, as well as our prior recommendations, indicate that new technologies should reach TRL 7—the level at which a prototype is demonstrated in an operational environment, has been integrated with other key supporting subsystems, and is expected to have only minor design changes—at the start of construction. In our November 2010 report, we recommended that the Secretary of Energy evaluate where DOE’s guidance for gauging the maturity of new technologies is inconsistent with best practices and, as appropriate, revise the guidance to be consistent with federal agency best practices. DOE generally agreed with our recommendation.
Concerning projects, in his June 2015 memorandum, the Secretary of Energy directed that critical technologies should reach TRL 7 before major system projects—those with a total cost of greater than $750 million—receive approval of their performance baselines. Concerning programs, NNSA officials told us they recently issued technology readiness assessment guidance that was not used in the B61-12 LEP but will be used in the W80-4 and Interoperable Warhead LEPs. The new management approach that the B61-12 LEP’s program managers have implemented, along with the new Program Execution Guide, may help NNSA address the potential management challenges that NNSA officials and others have identified with previous LEPs, but it is too soon to determine whether this will be the case. NNSA and the Air Force have instituted a process to identify risks within the B61-12 LEP and develop plans to manage those risks. However, a constrained development and production schedule—driven by the aging of legacy B61 bombs and the need to start work on other LEPs, among other factors—could complicate risk management efforts. According to NNSA and Air Force officials, the B61-12 LEP risk analysis and management approach uses the program’s integrated master schedules in conjunction with a risk register, the Active Risk Manager database. Specifically, the LEP’s 48 product realization teams (PRT)—the groups of scientists, engineers, and subject-matter experts that perform the ground-level project work on B61-12 components and subassemblies—are responsible for identifying risks. Most risks are managed at the PRT level, but risks that have the potential to affect top-level schedule milestones or the program’s ability to deliver a weapon that meets performance requirements are presented to joint review boards for inclusion in the Active Risk Manager database.
These higher-level risks—referred to as joint risks—are categorized according to the likelihood of their occurrence and the consequences should they occur. Joint risks with the highest likelihood and consequence are color coded as “red” risks, with successively lower-likelihood and -consequence risks labeled as “yellow” and “green,” respectively. Program officials develop risk management steps for each joint risk, document and make time for these steps in both the Active Risk Manager database and the relevant integrated master schedules, and brief the Nuclear Weapons Council on the status of joint risks. Additionally, NNSA identifies and documents opportunities—program areas with the potential to realize saved time and cost. According to NNSA and Air Force officials, some of the joint risks identified through this process have already been successfully managed. For example, NNSA officials told us that they were able to avoid the risk of a shortage of a type of glass necessary for electrical connections by procuring the glass for the entire program in advance. NNSA estimates that avoiding this risk prevented a program delay that could have lasted for more than a year and increased program costs by more than $2 million. Other joint risks may affect later stages of the B61-12 LEP, so it is too soon to tell if plans to manage them will be effective. Joint risks in the “red” category (i.e., high risk) include risks related to the compatibility of the B61-12 with the still-developing F-35 aircraft, the risk of temperature-related component failures in certain flight environments, and schedule risks related to the hydrodynamic testing of certain changed nonnuclear components. In November 2015, the federal program manager for the LEP reported that these risks remain “red” but are “trending” in a positive direction.
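The likelihood-and-consequence categorization described above can be sketched roughly as follows. The 1-to-5 scoring scale, the color thresholds, and the sample scores are assumptions made for illustration; the report does not describe the actual scales or thresholds used in the Active Risk Manager database:

```python
# Hypothetical sketch of a likelihood/consequence risk matrix of the kind
# described above. Scale and thresholds are illustrative assumptions, not
# NNSA's actual Active Risk Manager rules.

def categorize(likelihood, consequence):
    """Map 1-5 likelihood and consequence scores to a color code."""
    score = likelihood * consequence
    if score >= 15:        # high likelihood and high consequence
        return "red"
    if score >= 6:         # moderate combinations
        return "yellow"
    return "green"         # low likelihood and low consequence

# Sample register entries (scores are invented for the example).
joint_risks = {
    "F-35 compatibility": (4, 5),
    "specialty glass shortage": (2, 2),
}
categories = {name: categorize(l, c) for name, (l, c) in joint_risks.items()}
```

In a real register, each entry would also carry the mitigation steps that program officials document in the database and the integrated master schedules.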
Based on our discussions with program officials and our review of NNSA and Air Force documentation, the steps necessary to manage these and other risks will occur over several years. Complicating efforts to manage future LEP risks—especially if risks are realized or new ones materialize—is a constrained development and production schedule. NNSA and DOD officials acknowledged the schedule’s constraints, which they say are driven by factors including delays in starting the B61-12 LEP because of a lengthy design study, the effects of sequestration, and the need to complete work on the B61-12 LEP to enable NNSA to start work on planned future LEPs. In testimony given to the Strategic Forces Subcommittee of the Senate Committee on Armed Services in March 2015, the Nuclear Weapons Council characterized the B61-12 LEP’s schedule as having “little, if any, margin left.” DOD officials have testified before Congress that the B61-12 LEP must be completed on the current schedule to ensure that the aging of B61 legacy bombs does not affect the United States’ ability to maintain its commitments to NATO, and DOE officials have testified that the LEP must be completed to ensure that DOE can effectively manage other ongoing and planned LEPs and stockpile stewardship activities. These activities are set to intensify in the coming years. For example, according to NNSA documents, NNSA plans to execute at least four LEPs per year simultaneously in fiscal years 2021 through 2025—along with several major construction projects, including efforts to modernize NNSA’s uranium and plutonium capabilities. Figure 3 shows the schedules for NNSA’s planned LEPs and major alterations. Given NNSA’s past problems in executing LEPs, and a schedule with little room for delays, NNSA and the Air Force may face challenges in the future in ensuring that risks are not realized and do not affect the program’s schedule, its cost, or the performance of the B61-12.
We will continue to assess the B61-12 LEP as it passes through later stages of the Phase 6.X process, in keeping with the Senate report provision that gave rise to this report. We are not making new recommendations in this report. We provided a draft of this report to DOE and DOD for review and comment. In its written comments, reproduced in appendix II, DOE generally agreed with our findings. DOD did not provide formal comments. Both agencies provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Secretary of Defense, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report assesses (1) the Department of Energy’s (DOE) National Nuclear Security Administration’s (NNSA) management approach for the B61-12 Life Extension Program (LEP) and (2) the extent to which NNSA and the Air Force are managing risks in the LEP. To assess NNSA’s management approach for the B61-12 LEP, we reviewed the program-developed documents that establish cost and schedule goals and track the program’s progress toward those goals. These documents included the program’s Joint Integrated Project Plan, Joint Top-level Schedule, Master Schedule, and Selected Acquisition Reports. In addition, we reviewed other program-developed guidance documents that the program management team prepared for the B61-12 LEP. These included the Integrated Phase Gate Implementation Plan, Project Controls System Description, Systems Engineering Plan, Quality Plan, and Configuration Management Plan. 
We also reviewed the Procedural Guideline for the Phase 6.X Process, which describes the roles and functions of DOE, the Department of Defense (DOD), and the Nuclear Weapons Council in nuclear weapon refurbishment activities such as the B61-12 LEP. In addition, we examined DOE and DOD directives and NNSA policy letters to understand departmental requirements for the management of the LEP. For DOE, these included DOE Order 251.1C, Departmental Directives Program; DOE Order 413.3B, Program and Project Management for the Acquisition of Capital Assets; DOE Guide 413.3-4A, Technology Readiness Assessment Guide; and NNSA’s Defense Programs Program Execution Guide. For DOD, these included DOD Instruction 5000.02, Operation of the Defense Acquisition System, and DOD Instruction 5030.55, DoD Procedures for Joint DoD-DOE Nuclear Weapons Life-Cycle Activities. For information on the management of the B61-12 LEP in the broader context of joint DOE-DOD stockpile stewardship activities, we also reviewed documents such as DOD’s Nuclear Posture Review Report of 2010 and DOE’s Stockpile Stewardship and Management Plan. To assess the extent to which NNSA and the Air Force are managing risks in the LEP, we reviewed the documents described above. In addition, we visited NNSA’s Sandia National Laboratories and Los Alamos National Laboratory to view systems that track project activities, cost and schedule information, and the execution of risk management steps, as well as to meet program officials responsible for the design and production of the B61-12 and see some of the components under development. The systems we reviewed included the B61-12 LEP’s Active Risk Manager database and the systems holding classified elements of project plans and schedules. For both objectives, in the course of our site visits to the laboratories named above, we interviewed federal officials and contractors involved with the B61-12 LEP. 
We also interviewed officials in NNSA offices responsible for providing guidance, oversight, and program review for the B61-12 LEP and other such defense programs. For criteria and context, we used the GAO Cost Estimating and Assessment Guide and our past reports on LEPs and NNSA cost estimating practices. Throughout our work, we coordinated with a team from DOE’s Office of Inspector General, which is conducting its own review of the B61-12 LEP and plans to issue a classified report. We conducted this work from July 2014 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Jonathan Gill (Assistant Director), Antoinette C. Capaccio, Penney Harwell Caramia, Pamela Davidson, Dan Feehan, Alex Galuten, Jennifer Gould, Rob Grace, Alison O’Neill, Tim Persons, Ron Schwenn, Sara Sullivan, and Kiki Theodoropoulos made key contributions to this report.
Weapons in the U.S. nuclear stockpile are aging. NNSA and DOD undertake LEPs to refurbish or replace nuclear weapons' aging components. In 2010, they began an LEP to consolidate four versions of a legacy nuclear weapon, the B61 bomb, into a bomb called the B61-12 (see fig.). NNSA and DOD have stated they must complete this LEP by 2024 to uphold U.S. commitments to the North Atlantic Treaty Organization. As of September 2015, NNSA and DOD estimated that the B61-12 LEP would cost about $8.9 billion. Senate Report 113-44 included a provision for GAO to periodically assess the status of the B61-12 LEP. This report assesses (1) NNSA's management approach for the B61-12 LEP and (2) the extent to which NNSA and the Air Force are managing risks in the LEP. GAO reviewed project plans, schedules, management plans, and other documents and program data, and visited the two NNSA national laboratories—Sandia and Los Alamos—that serve as the design agencies for the LEP. The B61-12 life extension program's (LEP) managers have developed a management approach that officials from the Department of Energy's (DOE) National Nuclear Security Administration (NNSA) and the Department of Defense (DOD) regard as improved over the management approach used for past LEPs, which experienced schedule delays and cost overruns. Among other things, the B61-12 LEP is the first LEP to use earned value management, a tool that measures the planned versus actual value of work accomplished in a given period, which may help NNSA ensure that work progresses on budget and on schedule. It is also the first LEP to integrate the schedules and cost estimates for activities at all participating NNSA sites. NNSA used this new approach to inform its first Program Execution Guide for defense programs, issued in August 2014, which applies to all NNSA defense programs. 
NNSA's new management approach notwithstanding, the B61-12 LEP faces ongoing management challenges in some areas, including staff shortfalls and an earned value management system that has yet to be tested. The new management approach may help the LEP address these potential challenges, but it is too soon to determine whether this will be the case. To manage risks in the B61-12 LEP, NNSA and the Air Force use a risk management database and integrated schedules to categorize risks and incorporate risk management steps in the schedules. According to NNSA and Air Force officials, some risks have already been managed in this manner. For example, NNSA estimates that making a needed material procurement in advance prevented a potential delay of more than a year and a potential cost increase of more than $2 million. Remaining risks include the risk that components may fail in certain flight environments and risks related to testing of certain nonnuclear components. NNSA is also working to ensure future compatibility with the F-35 aircraft. NNSA and Air Force officials said they will not know for several years whether steps planned to manage these risks are adequate. A constrained development and production schedule—which DOE's and DOD's Nuclear Weapons Council characterized as having “little, if any, margin left”—complicates efforts to manage risks. Factors constraining the schedule include the aging of components in current versions of the B61, delays in starting the B61-12 LEP because of a lengthy design study, the effects of sequestration, and the need to complete the B61-12 LEP so that NNSA can begin other planned LEPs. GAO will continue to monitor these issues as it assesses the LEP in later stages. GAO is making no new recommendations but discusses the status of prior GAO recommendations in this report. In commenting on a draft of this report, DOE generally agreed with GAO's findings and provided technical comments that were incorporated, as appropriate. 
DOD provided technical comments that were also incorporated, as appropriate.
The concept of establishing a position to integrate management functions within federal departments can be traced back to the first Hoover Commission, which was charged by Congress with reviewing and recommending ways to improve the organization and operation of federal agencies. The commission, which lasted from 1947 to 1949, proposed numerous recommendations to strengthen departmental management leadership, including the creation through statute of the position of assistant secretary for administration in each executive department. This senior-level official was to be selected from the career civil service and would direct crosscutting administrative activities, such as budget, finance, human resources, procurement, management analysis, and support services. The commission’s recommendation was subsequently adopted and these assistant secretaries for administration, positions filled by career appointees, were established in many of the executive departments throughout the 1950s and 1960s. The more recent concept of the COO/CMO position largely came out of the creation of performance-based organizations (PBO) in the federal government in the late 1990s and early 2000s. During that time, the administration and Congress renewed their focus on the need to restructure federal agencies and hold them accountable for achieving program results. To this end, three PBOs were established, which were modeled after the United Kingdom’s executive agencies. A PBO is a discrete departmental unit that is intended to transform the delivery of public services by having the organization commit to achieving specific measurable goals with targets for improvement in exchange for being allowed to operate without the constraints of certain rules and regulations to achieve these targets. The clearly defined performance goals are to be coupled with direct ties between the achievement of the goals and the pay and tenure of the head of the PBO, often referred to as the COO.
The COO is appointed for a set term of typically 3 to 5 years, subject to an annual performance agreement, and is eligible for bonuses for improved organizational performance. With the backdrop of these PBOs and an ongoing focus on transforming organizational cultures in the federal government, the Comptroller General convened a roundtable of government leaders and management experts on September 9, 2002, to discuss the COO concept and how it might apply within selected federal departments and agencies. The intent of the roundtable was to generate ideas and to engage in an open dialogue on the possible application of the COO concept to selected federal departments and agencies. The participants at the roundtable offered a wide range of suggestions for consideration as the executive branch and Congress were seeking to address the federal government’s long-standing management problems and the need to move to a more responsive, results-oriented, and accountable federal government. Nonetheless, there was general agreement on the importance of the following actions for organizational transformation and management reform: Elevate attention on management issues and transformational change. Top leadership attention is essential to overcome organizations’ natural resistance to change, marshal the resources needed to implement change, and build and maintain the organizationwide commitment to new ways of doing business. Integrate various key management and transformation efforts. There needs to be a single point within agencies with the perspective and responsibility—as well as authority—to ensure the successful implementation of functional management and, if appropriate, transformational change efforts. Institutionalize accountability for addressing management issues and leading transformational change. 
The management weaknesses in some agencies are deeply entrenched and long-standing, and it can take at least 5 to 7 years of sustained attention and continuity to fully implement transformations and change management initiatives. Still, it was generally agreed at this roundtable discussion that the implementation of any approach should be determined within the context of the specific facts and circumstances that relate to each individual agency. In the time since the 2002 roundtable, the COO concept has evolved into the COO/CMO concept with a focus on business transformation, and has received even greater attention within the federal government. Legislative proposals have been introduced in Congress to establish CMO positions at DOD and DHS to help address transformation efforts at the two departments, both of which are responsible for various areas identified on our biennial update of high-risk programs. These legislative proposals differ somewhat in content but would essentially create a senior-level position to serve as a principal advisor to the secretary on matters related to the management of the department, including management integration and business transformation. Some of these legislative proposals also include specific provisions that spell out qualifications for the position, require a performance contract, and provide for a term appointment of 5 or 7 years. In August 2007, the proposal to create a CMO in DHS at an Executive Level II, but without a term appointment, was enacted into law. In 2000, Congress created a Deputy Secretary for Management and Resources position at the Department of State; however, the administration opposed the creation of a second deputy position, and the position has never been filled. Therefore, at the present time, no federal department has a COO/CMO-type position with all these characteristics. 
However, the heads of federal departments and selected agencies designate a COO, who is usually the deputy or another official with agencywide authority, to sit on the President’s Management Council. The council was created by President Clinton in 1993 in order to advise and assist the President and Vice President in ensuring that management reforms are implemented throughout the executive branch. The Deputy Director for Management of OMB chairs the council, and the council is responsible for improving overall executive branch management, including implementation of the President’s Management Agenda (PMA); coordinating management-related efforts to improve government throughout the executive branch and, as necessary, resolving specific interagency management issues; ensuring the adoption of new management practices in agencies throughout the executive branch; and identifying examples of, and providing mechanisms for, interagency exchange of information about best management practices. Ascertaining which criteria might be relevant for a particular agency would assist in determining the type of COO/CMO position that might best be established in the agency. The following is a summary of five criteria that can be used to determine the appropriate type of COO/CMO position in a federal agency. This summary includes various statements and examples provided by the officials we interviewed and the forum participants, along with relevant references to our previous work. Agencies that have long-standing management weaknesses and high-risk operations or functions could be good candidates for establishing a COO/CMO-type position. Agencies with programs and functions that we designate as high risk, like DOD, would be especially appropriate candidates for such positions. Our interviews with officials at the four case-study organizations reinforced that an agency’s overall performance should be considered when assessing the type of COO/CMO that might be needed. 
For example, an official in one of the agencies commented that a COO/CMO position might be needed if an agency has a high degree of material and financial weaknesses. Another agency official said that an additional factor to consider is whether the organization has had many large projects fail, a likely indicator that the agency has not placed sufficient attention on integration. In a discussion of the importance of establishing measures to assess organizational performance, a department official commented that the integration of management functions is often not measured within federal agencies and that in order for full integration to occur, it must be stimulated and given a timeline. We have previously suggested that agencies engaged in major transformation efforts and those agencies experiencing particularly significant challenges in integrating disparate organizational cultures, such as DHS, could also be good candidates for having COO/CMO-type positions in place. Our interviews with officials at the case-study organizations confirmed that the degree of organizational change needed should be a criterion to consider when assessing the need for a COO/CMO. For example, an agency official we interviewed commented that an agency undergoing significant transformation might benefit from a COO/CMO position in place in order to focus principally on correcting weaknesses and exploring new approaches for meeting mission needs. Another agency official pointed out that the organizational culture of the agency should be considered, and he noted that a strong esprit de corps in an agency could affect the decision of whether a COO/CMO position is advisable. As we have previously reported, overcoming inertia and cultural resistance to change can be a significant challenge within agencies. The nature and complexity of mission, including the range, risk, and scope of an agency’s mission, is another factor that should be considered in the assessment for a COO/CMO position.
For example, a department official we interviewed said that the complexity of an agency’s mission should be considered when assessing the need for a COO/CMO, regardless of the size of the agency. Another agency official commented that an organization with a single mission focus might not need a COO/CMO position. A forum participant noted that implementing change at an organization such as DHS can be challenging because the department does not have one single mission (i.e., emergency and nonemergency operations). In suggesting that a wide range of organizational missions should be a factor when considering the type of COO/CMO, a departmental official we interviewed pointed out that Treasury manufactures currency, collects taxes, manages the national debt, and provides the Director of National Intelligence with information on terrorist financing activities. Officials frequently cited the size of an organization as an important factor to consider when reviewing the type of COO/CMO position. For example, a case-study official suggested that a COO/CMO position would not be necessary in an organization with only 50 people whereas an organization with 2,000 employees could need such a position to oversee and integrate the management functions. He said that as organizations become larger, they are more likely to need coordinating structures to help with integration and coordination because communication can easily break down. Another official added that a COO/CMO position might work best for a large decentralized organization, where it is more difficult to enforce policy and where there is no entity to oversee and integrate the various functions. Some forum participants concluded that for smaller agencies, the deputy could carry out the COO/CMO role. 
Another case-study official remarked that a COO/CMO-type position might be relevant for a smaller organization if there were a high degree of risk and grave consequences for poor communication and coordination, such as with the National Aeronautics and Space Administration. However, another department official suggested that the size of the organization might not be highly relevant when considering the establishment of a COO/CMO position because every agency needs to have a consolidation point in the flow of information to minimize disjointed communication and a lack of coordination. Organizational structure was also suggested by officials as a factor to consider in determining the type of COO/CMO position. For example, a department official suggested that a COO/CMO position should be established in agencies with a wide geographic dispersion of personnel and facilities. Another agency official commented that an additional factor to consider is the degree to which the organization’s activities are duplicative or stovepiped. Still another official offered that the number of management layers in the organization and the existing span of control for managers should be a factor in assessing the type of COO/CMO. The types of reporting relationships and the number of dotted lines of authority on the organizational chart might also give indications about the need for a COO/CMO position, as cited by another agency official we interviewed. Another important factor to consider is the extent of knowledge and experience and the level of focus and attention of existing senior leadership. For example, an agency official we interviewed remarked that if there has not been sufficient attention and focus on management issues to accomplish the mission of the organization, then establishing a COO/CMO position would add value. 
Some forum participants noted that management execution and integration require a long-term focus, and that under the existing system, agency senior leaders are not likely to stay in their positions for the long term. According to another official we interviewed, an additional factor to consider is the extent to which the agency has a large number of noncareer positions (e.g., political appointees) carrying out management roles. A key thread of discussion at the Comptroller General’s April 2007 forum was the possible need for different types of COO/CMO positions based on whether the position is predominately a transformational role in instituting new processes and organizational culture change or an operational role in a “steady state” organization. Depending on these five criteria, there could be several types of COO/CMO positions, including the types shown below. The existing deputy position could carry out the integration and business transformation role. This type of COO/CMO might be appropriate in a relatively stable or small organization. A senior-level executive who reports to the deputy, such as a principal under secretary for management, could be designated to integrate key management functions and lead business transformation efforts in the agency. This type of COO/CMO might be appropriate for a larger organization. A second deputy position could be created to bring strong focus to the integration and business transformation of the agency, while the other deputy position would be responsible for leading the operational policy and mission-related functions of the agency. For a large and complex organization undergoing a significant transformation to reform long-standing management problems, this might be the most appropriate type of COO/CMO.
A number of forum participants and officials we interviewed, including OMB’s Deputy Director for Management, said that the deputy position should generally carry out the role of integrating key management functions and transformational efforts in agencies rather than establishing a separate COO/CMO position. At the same time, given the competing demands on deputy secretaries in executive branch departments across the federal government to help execute the President’s policy and program agendas, a number of agency officials argued that it is not practical to expect that the deputy secretaries will be able to consistently undertake this vital integrating responsibility. Moreover, while many deputy secretaries may be appointed based in part on their managerial experience, it has not always been the case, and not surprisingly, the management skills, expertise, and interests of the deputy secretaries have always varied and will continue to vary. Then again, some officials we interviewed maintained that a COO/CMO position would be appropriate for any federal department or agency because there is always a need to integrate management functions and ensure collaboration in new initiatives. We identified six key strategies that agencies should consider when implementing COO/CMO positions. In these six strategies, we recognize and forum participants underscored that the best approach to use in any given agency should be determined within the context of the specific facts and circumstances surrounding that agency and its own challenges and opportunities. The following is a more detailed discussion of these strategies along with a range of related insights, views, and examples that we identified. 
In previous reports, we have proposed that the COO/CMO position would serve as a single organizational focus for key management functions, such as human capital, financial management, information resources management, and acquisition management, as well as for selected organizational transformation initiatives. By their very nature, the problems and challenges facing agencies are crosscutting and hence require coordinated and integrated solutions. Thus, the COO/CMO essentially serves as a bridge between the agency head, functional chiefs, and mission-focused executives. The COO/CMO provides leadership and vision, bringing greater integration and increased attention to the agency’s management functions in order to enable agency employees to accomplish their missions more efficiently and effectively. The COO/CMO would offer the benefit of increased opportunities to coordinate and identify crosscutting issues that are fundamental to effectively executing any administration’s program agenda yet do not generally entail program policy-setting authority. The COO/CMO would also bolster the agency’s efforts to overcome the natural resistance to change, challenging conventional approaches and developing new methods and systems for implementing business transformation in a comprehensive, ongoing, and integrated manner. We have previously suggested that in crafting an approach for any specific agency, Congress could make clear in statute the broad responsibilities for the senior official tasked with management integration and business transformation. Congress has taken this approach with other similar senior-level positions that can serve as illustrative models. For example, in 2003 Congress created the position of Deputy Architect of the Capitol/COO, responsible for the overall direction, operation, and management of that organization. 
Under the statute, besides developing and implementing a long-term strategic plan, the Deputy Architect/COO is to propose organizational changes and staffing needed to carry out the organization’s mission and strategic and annual performance goals. In addition, Congress has articulated positional responsibilities in important governmentwide management legislation. For example, the Chief Financial Officers Act of 1990 (CFO Act), which requires 24 federal agencies to have CFOs, clearly lays out the CFOs’ responsibilities, including developing and maintaining integrated accounting and financial management systems; directing, managing, and providing policy guidance and oversight of all financial management personnel, activities, and operations; and approving and managing financial management systems design and enhancement projects. By establishing such responsibilities in statute, Congress created clear expectations for the positions and underscored its desire for employing a professional and nonpartisan approach in connection with these positions. (App. III provides a summary of the key responsibilities for statutory chief officer positions in the federal government.) Each of the four organizations in our study—Treasury, IRS, Justice, and MIT—has a senior-level official responsible for integrating the key management functions of human capital, financial management, information resources management, and acquisition management. Examples of other functional responsibilities of the case-study COO/CMOs include strategic planning, program evaluation, facilities and installations, and safety and security. The COO/CMOs of the four case-study organizations are also directly responsible for leading many of the business transformation efforts in their respective organizations. 
At IRS, for example, both the COO/CMO and the senior executive of the mission side of the agency are heavily involved in managing change efforts, but the COO/CMO has primary responsibility for spearheading business transformation initiatives that cut across mission-support programs and policies. The case-study officials we interviewed and the participants of the April 2007 forum generally agreed that a senior-level official should be responsible for carrying out the COO/CMO role of integrating key management functions in the organization. For example, an official from one of the federal agencies noted that without someone in the agency devoted to management functions, the focus of the agency’s senior leaders will remain on the policy side of the agency. One of the COO/CMOs of the four organizations commented that there is a benefit in having the mission-support activities in an organization grouped together under one senior leader so as to support the common interests of these activities. Another COO/CMO told us that his role was to “make life easier” for the mission side of the organization. Another official echoed these sentiments, saying that the COO/CMO needs to be viewed by the mission side of the organization as adding value rather than simply promulgating rules. Several case-study officials and forum participants also stressed that, to be effective in the position, the COO/CMO must have an authoritative role in overseeing the agency’s budget. The roles and responsibilities of the COO/CMO related to business transformation were also widely discussed in our case-study interviews and at the forum. For example, a forum participant said that the senior official leading transformation within an agency needs to be in an operational role rather than a policy role. 
Another forum participant stressed that although the COO/CMO is a management and transformational position, the roles and responsibilities of the position can differ depending on the extent to which the agency is undergoing transformation. Accordingly, when significant transformation is the goal, the role of the COO/CMO should be focused on breakthrough improvements to achieve this goal. The COO/CMO at one case-study agency said that when organizations carry out these transformation efforts, managers throughout the organization will often try to accelerate decision making and the execution of change, which can be quite detrimental. He noted that in order to prevent these types of problems, a federal agency needs a COO/CMO with a role and associated responsibilities that allow for directing the pace of change implementation while also controlling the level of detail and personal involvement in the change. The COO/CMO at another agency remarked that in order for an agency to be successful in carrying out any transformation process, experienced agency managers need to be involved at the beginning of the process, and thus the roles and responsibilities of the COO/CMO should complement those of other managers in the agency. Several agency officials and forum participants told us that it is also important to avoid being overly restrictive in specifying the roles and responsibilities for the COO/CMO position. For example, a forum participant said that Congress should not legislate details of how to carry out the responsibilities of a COO/CMO position because legislation is geared to the present, whereas the agency and the environment in which it operates can change over time. Another forum participant echoed that any legislation to establish a COO/CMO position should not contain detailed roles and responsibilities because doing so could hinder effectiveness in the position. 
Another forum participant added that the roles and responsibilities should be broadly defined, allowing flexibility from agency to agency. Another forum participant suggested that the agency head could specify the responsibilities of a COO/CMO in formal terms, such as in a “tasking memo.” Nonetheless, a number of agency officials we interviewed stressed the importance of communicating to employees throughout the agency the specifics of the COO/CMO’s actual role in the organization. We have previously noted the importance of ensuring that all agency employees are fully aware of the duties and key areas of responsibility for executives in charge of major activities or functions in the agency. The COO/CMO concept is consistent with the governance principle that there needs to be a single point within agencies with the perspective and responsibility to ensure the successful implementation of functional management and business transformation efforts. The organizational level and span of control of the COO/CMO position are crucial in ensuring the incumbent’s authority and status within the organization. We have previously argued that the COO/CMO position should be part of an agency’s top leadership, for example, a deputy secretary for management. At the same time, however, the placement of the COO/CMO position needs to take into account existing positions and responsibilities to avoid unnecessary additional layers of management. Regardless of how the position is structured in an agency, it is critical that the individuals appointed to these positions be vested with sufficient authority to be able to integrate management functions and achieve results. For the four organizations included in our review, the COO/CMOs either reported to the organization head (i.e., a second-level reporting position) or reported to an individual who reports to the organization head (i.e., a third-level reporting position). 
Specifically, the IRS COO/CMO reports to the Commissioner of Internal Revenue, the MIT COO/CMO reports to the President of the university, and the COO/CMOs at Treasury and Justice report to the respective deputy positions in those departments. (See fig. 1 for simplified organizational charts showing the reporting relationships of the four COO/CMO positions.) The COO/CMOs for the four organizations told us that they had the necessary and appropriate level of authority at their respective levels within their organizations. The case-study officials and the forum participants broadly recognized that a COO/CMO should have a high enough level of authority to ensure the successful implementation of functional management and transformational change efforts in the agency. However, the officials and participants had mixed views as to the most appropriate organizational level for a COO/CMO position. Some interviewees and forum participants told us that the COO/CMO position should report to the head of the agency (i.e., second-level reporting relationship). A department official said, for example, that having a COO/CMO position on par with the deputy secretary would demonstrate that management issues are viewed as important in the agency. Another agency official commented that a COO/CMO reporting to the agency head would more likely be involved in key decision making within the organization. Still other interviewees and forum participants said that the COO/CMO should report to an individual who reports to the organization head (i.e., third-level reporting relationship). For example, a department official told us that the COO/CMO should be at the under secretary level in any department, yet stressed that the organizational level itself would not guarantee success in the COO/CMO position. 
A forum participant said that a COO/CMO position should be placed at a high level within the organization, but cautioned that a COO/CMO position with a deputy secretary as a peer would create confusion within the organization if responsibility and accountability are not clearly defined. Some of the agency officials and forum participants said that the COO/CMO’s level on an organizational chart is not as critical as the level of authority and executive-level attention that is given to the COO/CMO position. For example, a department official told us that regardless of where the COO/CMO position is placed on the organizational chart, the COO/CMO must have a close relationship with and be a trusted advisor to the agency leadership. Another official added that the effectiveness of a COO/CMO does not always depend on where he or she is on the organizational chart, but mostly on the personality and abilities of the individual. A forum participant commented that the reporting relationship of the COO/CMO should depend primarily on the agency’s agenda and mission. He said, for instance, that if the agency is focused on multiple issues and there are transformational initiatives under way, dual deputies are needed (similar to the IRS and MIT models of governance). Additionally, some officials we interviewed commented on COO/CMO positions in connection with the relationship between departments and their component agencies. For example, an official at one of the case-study agencies suggested that the reporting level of the COO/CMO position could differ depending on whether the position is in a department or a bureau. Namely, the COO/CMO in a department might report to the deputy, while the COO/CMO at the bureau level could report directly to the bureau head. This official noted that at the bureau level, senior management is typically geared more toward operations than policy. 
Another official suggested the possibility of having a COO/CMO position at each of the various bureaus of a department, which would then form a team of individuals led by the department’s COO/CMO to integrate management functions and business transformation throughout the department. An important issue to consider when implementing the COO/CMO position is the reporting relationships of the statutory management functional chiefs, namely the CFO, CIO, CHCO, and CAO. Some of these positions are required by statute to report directly to their agency heads; in other cases, no direction is provided in statute. However, these functional management chiefs could report to a COO/CMO who was given the responsibility for integrating the organization’s management functions. For many large federal departments and agencies, such an arrangement would likely require amending existing legislation, for example, the CFO Act. This arrangement would need careful analysis to ensure that any legislative changes result in augmented attention to management issues yet do not inadvertently lead to a reduction in the authority of key management officials or the prominence afforded a particular management function. Although federal law generally requires that CFOs and CIOs report directly to their agency heads, this reporting relationship does not always happen in practice. For example, in July 2004, we reported on the status of CIO roles, responsibilities, and challenges (among other things) at 27 major agencies. Nineteen of the CIOs in our review stated that they reported directly to the agency head in carrying out their responsibilities. In the other 8 agencies, the CIOs stated that they reported instead to another senior official, such as a deputy secretary, under secretary, or assistant secretary. 
In addition, 8 of the 19 CIOs who said they had a direct reporting relationship with the agency head noted that they also reported to another senior executive, usually the deputy secretary or under secretary for management, on an operational basis. Only about a third of those who did not report to their agency heads expressed a concern with their reporting relationships. For the July 2004 report, we also held two panels of former agency senior executives responsible for information technology who had various views on whether it was important that the CIO report to the agency head. For example, one former executive stated that such a reporting relationship was extremely important, another emphasized that organizational placement was not important if the CIO had credibility, and others suggested that the CIO could be effective while reporting to a COO. Unlike for CFOs and CIOs, the reporting relationships of CHCOs and CAOs are not prescribed in federal statute and are at the discretion of the agency head. In May 2004, we provided information on the existing reporting relationships of the CHCOs as part of our review of federal agencies’ implementation of the Chief Human Capital Officers Act of 2002. At that time, we noted that more than half (15 of 24) of the CHCOs reported directly to the agency head, with the remainder reporting to another agency official. Some CHCOs who reported directly to the agency head told us that this reporting relationship gives them an important “seat at the table” where key decisions are made. However, some CHCOs who did not report to their agency head said having all or most of the agency chief management positions as direct reports to the agency heads may impede efficient management coordination within the agency. Most of the political appointees (9 of 12) reported directly to the agency head, while half of the career executives (6 of 12) reported to another agency official. 
Many of the officials we interviewed from the case-study organizations told us that the management functional chiefs should report directly to a COO/CMO; otherwise, the COO/CMO would not have the level of authority needed to ensure the successful implementation of functional management and transformational change efforts in the agency. An agency official pointed out, for example, that one of the purposes of integrating functions within an agency is to avoid having everyone report directly to the agency head. Some interviewees raised concerns about where a COO/CMO position might be created in the agency and the resulting changes in the level of authority and reporting relationships related to the functional management chiefs. For example, an official at one of the case-study agencies said that if a COO/CMO position were established in an agency and this change in effect resulted in moving the functional management chiefs down a level on the organizational chart, some functional chiefs might view this change as a demotion because they would no longer have a direct line to the deputy. Another official maintained that the COO/CMO position should report to the agency head in part because the agency could have morale or recruitment problems within the functional chief positions if the COO/CMO were at a third level on the organization chart and the functional chiefs reported to him or her. Effective working relationships of the COO/CMO with the agency head and his or her peers are essential to the success of the COO/CMO position. In various reports over the years, we have stressed the importance of good working relationships to achieving program goals and agency missions. 
As with other senior-level officials in agencies, individuals serving in COO/CMO positions can establish effective working relationships through various methods, such as forming alliances with other senior managers to help build commitment and getting managers from the mission side of the enterprise involved in and accountable for key management projects. We have also previously noted that active participation in executive processes and committees facilitates the ability to build effective executive-level working relationships. Because of high turnover among politically appointed leaders, it is particularly important for appointees and senior career executives to develop good working relationships from the beginning. At the four case-study organizations, working relationships among the COO/CMOs and other senior leaders were crucial to effectively carrying out the respective integration and transformation roles. For example, in May 2003, the then-Commissioner of Internal Revenue realigned IRS’s management structure, with the primary change being the creation of an operations support organization to be led by a deputy commissioner serving in a COO/CMO-type role. This new position, the Deputy Commissioner for Operations Support, would be responsible for the modernization program and for driving productivity across the organization. The other deputy—the Deputy Commissioner for Services and Enforcement—would be able to focus on the mission side of the agency, including prioritization of multiple enforcement initiatives and reducing cycle time for enforcement actions. Officials at IRS stressed the importance of the working relationship between the agency’s two deputy commissioners—one serving as the COO/CMO—in carrying out their respective roles and responsibilities in leading the mission and mission-support offices of the agency. 
According to IRS officials we interviewed, open communication and carefully planned coordination between the mission and mission-support sides of the agency help significantly in ensuring that the people, processes, and technology are well-aligned in support of the agency’s mission. Officials at MIT echoed the crucial importance of the working relationship between the Executive Vice President, who serves in a COO/CMO-type position and leads the mission-support offices of the university, and the Provost, who oversees the academic offices. MIT officials pointed out, for instance, that both university executives work closely together on formulating an organizational budget to help ensure the most effective use of resources. An MIT official reiterated the comments of colleagues in stating that the relationship between the COO/CMO and Provost of the university is paramount to ensuring the effective integration of the academic and administrative sides of the university. The official added that over the years there have been differences in the working styles of the individuals in the Executive Vice President and Provost positions, but these relationships were still effective. Many forum participants confirmed the view that good executive-level working relationships are crucial for carrying out the COO/CMO position. While the position of COO/CMO can be a critical means for integrating and transforming business and management functions, other structures and processes need to be in place to support the COO/CMO in management integration and business transformation efforts across the organization. 
These structures and processes can include governance boards, business transformation offices, senior executive committees, functional councils for areas such as human capital and information technology, and short-term or temporary cross-functional teams, such as a project task force—all of which would be actively involved in planning, budgeting, monitoring, information sharing, or decision making. To bring focus and direction and help enforce decisions in the agency, the COO/CMO should be a key player in actively leading or supporting these integration structures and processes. We have previously reported that dedicating an implementation team to manage a transformation process is a key practice of successful mergers and organizational transformations. Because the transformation process is a massive undertaking, the implementation team must have a “cadre of champions” to ensure that changes are thoroughly implemented and sustained over time. Establishing networks, including a senior executive council, functional teams, or crosscutting teams, can help the implementation team conduct the day-to-day activities of the merger or transformation and help ensure that efforts are coordinated and integrated. To be most effective, this network should have clearly defined roles and responsibilities, which assign accountability for parts of the implementation process, help in reaching agreement on work priorities, and build a code of conduct that will help all teams work effectively. Our work on business transformation initiatives at DOD and DHS and at DHS’s U.S. Citizenship and Immigration Services shows that these agencies have used various governance and leadership processes and structures to help modernize, transform, and integrate the business side of their organizations. 
For example, each organization established a business transformation office or agency to provide a dedicated team to implement its transformation, although DHS subsequently eliminated its office. At the four organizations included in our case-study reviews, the COO/CMO position is a key player in integrating and coordinating mission-related programs and mission-support functions at the senior levels of the organization. Still, in addition to the important integration and transformation role of the COO/CMO, other structures and processes need to be in place. These approaches include structures and processes for coordinating mission and mission-support functions at the senior levels of the organization, as shown below. With its organizational realignment in 2003, IRS established a Strategy and Resources Committee to govern IRS strategy and ensure that resource allocations are appropriate for meeting mission needs. As the COO/CMO of IRS, the Deputy Commissioner for Operations Support chaired this committee, which included seven other senior IRS officials, among them the Deputy Commissioner for Services and Enforcement, the CFO, and the CIO. Responsibilities of the committee, which met every other month, included overseeing the agency’s strategic planning process and improvement initiatives, reviewing budget initiatives for alignment with the agency’s strategic plan, and reviewing the agency’s progress against critical performance measures. More recently, according to IRS officials, as the organization structure and the COO/CMO position matured and the need for more frequent exchanges of information grew, the Strategy and Resources Committee has evolved into a monthly senior executive team meeting that deals with strategy and resource issues as well as other topics related to resource allocation and business planning. 
At Treasury, the bureau head meetings and the Executive Planning Board are two mechanisms that the COO/CMO uses to integrate and coordinate management functions across the department. The heads of the Treasury bureaus meet regularly as a group to serve as an authorizing body for carrying out the department’s mission responsibilities. According to Treasury officials, the COO/CMO has used these monthly meetings as a mechanism for discussing management issues with the various bureaus and trying to create a shared approach to improving integration of the department’s management functions. Furthermore, Treasury’s Executive Planning Board leads the department’s annual budget and strategic planning process. As chair of the Executive Planning Board, the COO/CMO at Treasury provides executive oversight of the planning process, helping to identify trends and leverage opportunities for coordination and integration across the department. As the COO/CMO at Justice, the Assistant Attorney General for Administration has a standing role at a monthly meeting of Justice component heads to advise them on matters related to management issues in the department, such as the status of the department’s budget and new management requirements, as well as to hear component heads’ concerns and ideas for addressing management issues. The Justice COO/CMO also chairs a monthly meeting of the departmental components’ executive officers (or their equivalents), who are generally career staff responsible for each component’s management functions (budget, finance, procurement, facilities, information management, and human resources). According to Justice officials, this monthly meeting serves as a forum for addressing governmentwide and departmentwide management policy and operational matters, and these meetings help to ensure that management issues are appropriately addressed at the component level within the department. 
MIT has also relied heavily on committees to integrate management functions across the university. As the COO/CMO of MIT, the Executive Vice President participates in a weekly “foursome meeting” with the university President, Provost, and Vice President for Institute Affairs to discuss strategic issues for the organization. The COO/CMO is also a member of the university’s Academic Council, a group of about 20 senior-level university officials involved in the overall administration of the university who meet weekly to confer on matters of organizational policy and to advise the university President. According to an MIT official, if decisions on issues cannot be reached at lower committee levels within the university, such issues can be brought before the Academic Council for resolution. Another common mechanism for integrating and coordinating management functions across the organization is the use of standing committees and subcommittees that deal with specific issues and topics related to various functions, such as a “human capital council” and its subcommittees. The COO/CMO is usually directly involved in or provides important institutional support for these governance structures and their related processes, as shown below. With its organizational realignment in 2003, IRS established a Human Capital Board composed of representatives from across the agency’s major units to obtain input and to plan and monitor human capital initiatives and programs. The Human Capital Board, one of IRS’s governance boards, is headed by IRS’s Human Capital Officer and includes the Chief of Staff and the head of equal employment opportunity. The board governs IRS-wide human capital policy and plans workforce strategy and initiatives. MIT established a human resources council, called HR Partners, composed of various staff from across the university with human resources responsibilities. 
MIT’s council organizes formal training events for the human resources staff using the expertise and resources of the university’s business school as well as informal events, such as “lunch and learn” sessions to share information related to human capital management. Treasury’s CFO Council, which meets monthly, comprises the chief financial management officers of the department’s bureaus and major offices. The COO/CMO at Treasury serves as the department’s CFO and chairs the department’s CFO Council with the deputy CFO. Treasury’s CFO Council carries out its role through various working groups, which convene for recurring events, such as the preparation of the department’s financial statements and annual reporting on internal control issues. Justice’s CIO Council, composed of department and component CIOs, deals with all matters of departmentwide significance related to information technology policy and implementation. The Justice COO/CMO is responsible for supervising the overall direction of the CIO Council in formulating department policies, standards, and procedures for information systems and reviewing and approving contracts for information processing led by the department. A specific set of job qualifications for the COO/CMO position would aid in ensuring that the incumbent has the necessary knowledge and experience in the areas within the job’s portfolio. Our interviews at the four organizations and our prior work revealed that essential qualifications for a COO/CMO position include having broad management experience and a proven track record of making decisions in complex settings as well as having direct experience in, or solid knowledge of, the respective organization. To further clarify expectations and reinforce accountability, a clearly defined performance agreement with measurable organizational and individual goals would be warranted as well. 
As underscored in our interviews and the forum discussion, any performance agreement for the COO/CMO should contain realistic expectations as well as appropriate incentives and rewards for outstanding performance and consequences for those who do not perform. We have previously proposed that a specific set of job qualifications for the COO/CMO position could aid in ensuring that the officeholder has the necessary knowledge and experience in the areas within the job’s portfolio. We have suggested that the individual serving in a COO/CMO position be selected based on (1) demonstrated leadership skills in managing large and complex organizations and (2) experience achieving results in strategic planning, financial management, communications and information resources management, human capital strategy, acquisition management, and change management. We have also previously suggested that Congress consider formalizing the broad qualifications for any COO/CMO positions established in federal departments and agencies. By articulating qualification requirements directly in statute, Congress would be taking an important step toward further ensuring that high-quality individuals would be selected. As a point of comparison, Congress has set out qualifications for other management positions established in various federal agencies. For example, under statute, the Deputy Architect of the Capitol/COO is to have strong leadership skills and demonstrated ability in management in such areas as strategic planning, performance management, worker safety, customer satisfaction, and service quality. The COO of Federal Student Aid is to have demonstrated management ability and expertise in information technology, including experience with financial systems. The COO of the Air Traffic Organization is to have a demonstrated ability in management and knowledge of or experience in aviation. 
Additionally, the Commissioner for Patents and the Commissioner for Trademarks are to have demonstrated management ability and professional background and experience in patent law and trademark law, respectively. Congress has also established overall job qualifications for two of the management functional chief positions in the federal government—the CFOs and the CIOs. The CFOs are to “possess demonstrated ability in general management of, and knowledge of and extensive practical experience in, financial management practices in large governmental or business entities.” The CIOs are to be selected with special attention to relevant experience and professional qualifications related to records management, information dissemination, security, and technology management, among other areas. As with other Senior Executive Service (SES) appointments, the qualifications for the two federal career COO/CMO positions (Justice and IRS) required general management skills and characteristics reflected in the five executive core qualifications adopted by the Office of Personnel Management (OPM), namely leading change, leading people, results driven, business acumen, and building coalitions. In addition, SES positions can have technical and professional qualifications that are specific to each position. For example, according to the most recent job vacancy announcement for the Assistant Attorney General for Administration position at Justice, the COO/CMO is to have, among other things, experience in the management of a large and complex organization with diverse personnel as well as the demonstrated ability to direct the planning, implementation, integration, and evaluation of budget and management of major administration programs in a cabinet-level department. 
According to the position specification for the Executive Vice President at MIT, the qualifications of the COO/CMO position included senior financial and operational leadership experience in a large, complex organization with a reputation for world-class financial and administrative management; successful experience in leading change in a large, complex organization; and an understanding of the culture of an academic institution. The case-study officials and forum participants identified a range of recommended qualifications for a COO/CMO-type position in federal departments and agencies. For example, officials at each of the four organizations told us that communication and collaboration skills are critical for the COO/CMO role and that an essential qualification for a COO/CMO position is having broad management experience in making decisions in complex settings. Some of the officials we interviewed said that having both private and public sector experience would be valuable. An agency official said that public and private sector experience are both useful in serving in a COO/CMO position in that career federal employees tend to strive for long-lasting improvements, while individuals from the private sector often have a fresh perspective on addressing challenges within the agency. Another department official cautioned that if both private and public sector experience were required qualifications for a COO/CMO position, the agency would likely be disqualifying some individuals who could effectively carry out the position. In addition, some officials noted that having prior federal experience is beneficial because of the myriad of federal regulations governing human capital, financial management, information resources management, acquisition management, and other management functions. 
In addition, some interviewees told us that it was not necessary to have extensive experience with each key management function but having broad knowledge of at least one of them would be helpful for a COO/CMO. Several forum participants stated that the COO/CMO should have experience in managing large organizations and in successfully leading large-scale change efforts. The case-study officials and forum participants also identified a number of pros and cons for formalizing the qualifications of a COO/CMO position in federal statute. Some advantages to placing qualifications in statute included better ensuring that the agency brought on someone who had the knowledge, skills, and experience to effectively carry out the position and helping to ensure transparency in the hiring process. For example, a forum participant, referring to the job qualifications for the CFO positions as spelled out in the CFO Act, said that over time the individuals selected for CFO positions increasingly matched the statutory job qualifications. Disadvantages of formalized qualifications included unnecessarily screening out talented persons who could effectively carry out the position and overlooking that the job could change over time depending on the needs of the agency and the focus and talent of other senior agency managers. For example, a department official said that formalizing specific qualifications in statute does not provide enough flexibility in hiring the right person for the job, and another official added that the head of the agency should determine the qualifications needed for the COO/CMO position based on the strengths and weaknesses of current senior leaders and the overall needs of the agency. Nonetheless, many interviewees told us that if placed in statute, any qualifications for the COO/CMO position should be general enough to provide flexibility in selecting an individual who best matches the current needs of the organization. 
Another potentially important accountability mechanism to support the COO/CMO role is to use clearly defined, results-oriented performance agreements accompanied by appropriate incentives, rewards, and other consequences. We have reported on a number of benefits of performance agreements. Specifically, performance agreements can strengthen alignment of results-oriented goals with daily operations, foster collaboration across organizational boundaries, enhance opportunities to discuss and routinely use performance information to make program improvements, provide a results-oriented basis for individual accountability, and maintain continuity of program goals during leadership transitions. While performance agreements can be implemented administratively, Congress has also required performance agreements in statute as well as provided for performance assessments with consequences. For example, Congress has required the COO at the Department of Education’s Office of Federal Student Aid and the Secretary of Education to enter into an annual performance agreement with measurable organizational and individual goals that the COO is accountable for achieving. Further, the COO’s progress in meeting these goals is to form the basis of a possible performance bonus of up to 50 percent of base pay, as well as any decisions by the Secretary to remove or reappoint him or her. Similarly, Congress made it clear in statute that the Deputy Architect of the Capitol/COO may be removed from office by the Architect of the Capitol for failure to meet performance goals. Top civil servants in other countries—such as New Zealand, Canada, and the United Kingdom—also have performance agreements. Of the four organizations included in our study, two of the COO/CMOs— both career civil servants—had performance agreements, and two did not. 
The two performance agreements included a listing of overall objectives and commitments for each position along with general benchmarks and standards to be used in assessing the COO/CMO’s performance. For example, one of the commitments listed in the IRS COO/CMO’s performance agreement for fiscal year 2006 was to “drive processes to increase IRS security preparedness,” which would be measured, in part, by an improved score for security under the PMA. The IRS COO/CMO’s performance agreement also called for building strong alliances and gaining cooperation to achieve mutually satisfying solutions as well as acting to continuously improve products and services in the effort to meet overall performance commitments. At Justice, the COO/CMO’s performance work plan, with the elements and objectives that compose it, serves as a performance agreement for the Assistant Attorney General for Administration. The objectives listed in the Justice COO/CMO’s performance work plan for fiscal year 2007 also included direct references to managing and implementing the department’s approved plan to improve organizational performance, as outlined in the PMA. Many of the case-study officials and forum participants told us that performance agreements can help to ensure accountability for the COO/CMO position in setting out clear requirements and specific objectives. For example, an agency official commented that performance agreements have been effective in setting the stage for improved performance in his agency. A department official added that the performance agreement should have broad objectives and specific accomplishments that are well-documented in order to hold the COO/CMO accountable. Still other officials stressed that the COO/CMO’s performance objectives should be directly linked to an agency’s strategic plan. The officials we interviewed generally agreed that any performance agreement should have a removal clause in the event that the COO/CMO does not perform well. 
Officials also generally agreed that any performance agreement should have incentives, such as a bonus, for meeting or exceeding expectations as spelled out in the agreement. Given that organizational results and transformational efforts can take years to achieve, agencies need to take steps to ensure leadership continuity in the COO/CMO position. Foremost, an agency needs to have an executive succession and transition planning strategy that ensures a sustained commitment and continuity of leadership as individual leaders arrive or depart or serve in acting capacities. For example, in creating a CMO position for DHS, Congress has required the DHS CMO to develop a transition and succession plan to guide the transition of management functions with a new administration. The administration and Congress could also consider other possible mechanisms to help agencies in maintaining leadership continuity for the position. For example, the benefits of a term appointment for the position, such as instilling a long-term focus, need to be weighed along with the potential challenges of a term appointment, such as a lack of rapport between members of a new senior leadership team with any change in administration. Moreover, as emphasized in our interviews and the forum discussion, career appointments for the COO/CMO have advantages that should be fully assessed when considering the position’s roles, responsibilities, and reporting relationships. The establishment of a term appointment is one mechanism that should be considered for providing continuity to the COO/CMO position. We have previously endorsed setting a term appointment for the COO/CMO position because it would help provide the continuing, focused attention essential to successfully completing multiyear transformations. 
Large-scale change initiatives and organizational transformations typically require long-term, concerted effort, often taking years to complete and extending beyond the tenure of many political leaders. Providing a COO/CMO with a term appointment of about 5 to 7 years would be one way to institutionalize accountability over the extended periods needed to help ensure that long-term management and transformation initiatives provide meaningful and sustainable results. Statutory term appointments currently exist for various senior-level positions in a number of agencies, bureaus, commissions, and boards in the federal government. As described in table 1, the lengths of such terms can range from 3 to 15 years. The methods of appointment for these term positions vary as well, including appointment by (1) the President with the advice and consent of the Senate, (2) the secretary of a cabinet-level department, or (3) an agency head with the approval of an oversight committee. Government agencies in the United Kingdom, New Zealand, and Ireland also have COO-type positions in place with term appointments of 5 to 7 years. Term appointments for senior positions in federal agencies have been established in a number of cases primarily to promote and enhance continuity and independence. For example, during congressional deliberations on the Civil Service Reform Act of 1978, which established OPM, conferees agreed that the OPM Director should have a 4-year term but declined to require that the term coincide with the President’s so as to afford the Director a measure of independence in performing his or her duties. During congressional deliberations in 1994 to establish the Social Security Administration as an independent agency, creating a 6-year term for the agency’s Commissioner was viewed as one key feature to insulate the position from short-term political pressures and provide increased stability in the management of the agency. 
In testimony leading up to the 1998 restructuring of IRS, the explanations for establishing a 5-year term for the Commissioner of Internal Revenue chiefly centered on the goal of providing continuity in the position. At the four case-study organizations, none of the COO/CMOs were appointed or selected for their positions under a term appointment at the time of our review. As the COO/CMO at Treasury, the Assistant Secretary for Management was a noncareer position serving at the will of the President. As the COO/CMOs at Justice and IRS, respectively, the Assistant Attorney General for Administration and the Deputy Commissioner for Operations Support were both career SES positions without designated terms. As the COO/CMO at MIT, the Executive Vice President served MIT’s President and the university’s board of trustees and held the position without any predetermined length of service. The case-study officials and forum participants agreed with the need to ensure leadership continuity in the COO/CMO position, but there were mixed views as to whether a term appointment would be a strong mechanism for ensuring continuity in a COO/CMO position. Advantages of a term appointment included fostering accountability for the incumbent and the long-term consequences of his or her decisions, signaling to others in the agency that the incumbent will likely be in the position for the long term, and protecting the incumbent from undue political influence. Some case-study officials said that term appointments could potentially be a vehicle for promoting and enhancing continuity of leadership in the agency, assuming that the length of the term was sufficient to help ensure that long-term management and transformation initiatives are successfully completed. A forum participant said that changes in leadership at an agency would not pose a problem as long as the goals and milestones were clear and the definition of success was the same regardless of the leadership. 
Limitations of a term appointment included the need to develop new working relationships with a different leadership team when an administration changes as well as the fact that incumbents can readily leave the position prior to the end of any designated term. A number of forum participants expressed a strong concern that the agency head should have the ability to select the agency’s leadership team, especially given that personal relationships and rapport are so important. For example, a participant said that an individual who is “inherited” in the COO/CMO position by another Secretary can be easily marginalized. Some forum participants had concerns that longer terms, such as 7 years, would deter individuals from applying for COO/CMO positions. During the forum, however, the Comptroller General pointed out that many individuals would accept a COO/CMO position out of a desire to serve, regardless of term. Another option for promoting the continuity of leadership in the COO/CMO position is the use of career appointments. As we have previously reported, high turnover among politically appointed leaders in federal agencies can make it difficult to follow through with organizational transformation because the length of time often needed to provide meaningful and sustainable results can easily outlast the tenures of top political appointees. In previous testimony, we have suggested that the individual serving in a COO/CMO position be selected without regard to political affiliation. At the time of our review, the individuals in the COO/CMO positions at the three federal case-study agencies served under varying types of appointments, including both career and noncareer. As the COO/CMO at Treasury, the Assistant Secretary for Management was a presidentially appointed, Senate-confirmed position. 
As the COO/CMO at Justice, the Assistant Attorney General for Administration was a career SES position, although appointment to the position was subject to the approval of the President but not to Senate confirmation. As the COO/CMO at IRS, the Deputy Commissioner for Operations Support was also a career SES position. The case-study officials and forum participants offered a range of insights, views, and examples from their experiences regarding the issue of promoting continuity in the COO/CMO position by using career appointments. Several officials we interviewed at the case-study organizations told us that career appointments for COO/CMO-type positions in federal departments and agencies would provide a number of benefits over political appointments. These interviewees said that career SES personnel are more likely to help ensure continuity in the position, are generally more familiar with federal management issues, and can be easily reassigned to another position if they are not effective in the COO/CMO role. A department official also told us that another advantage in serving as a career COO/CMO is the degree of independence that can be brought to important decisions under consideration at an agency. Some forum participants agreed that career senior executives were the best option for filling COO/CMO positions because career executives could offer continuity and experience as administrations come and go. A participant remarked that because political appointees currently fill many of the executive-level administrative management positions that in the past were filled by career executives, a loss of continuity and experience has resulted. Some agency officials and forum participants raised concerns about filling COO/CMO positions with career civil servants. 
The challenges cited included the possibility that there might not be enough qualified career applicants for these positions and the restrictions on the administration’s ability to select individuals for these positions. For example, one forum participant said that the President and Secretary should have latitude in determining who fills the COO/CMO position because the relationship is crucial. There was also discussion on whether the senior management official in an agency should be a presidential appointment requiring Senate confirmation, while Senate confirmation would not be required of those officials who lead specific management functions (for example, financial management, information technology, or human capital) and who report to that senior management official. Forum participants differed in their views on the appropriate appointment type for the COO/CMO. During the forum, the Comptroller General suggested that some COO/CMO positions could be presidential appointments with Senate confirmation and others could be appointments without Senate confirmation. Given the long-standing management challenges faced by many government agencies, as well as the organizational transformation now taking place across government in response to a post-9/11 environment and other changes, new leadership models are needed to help elevate, integrate, and institutionalize business transformation and management reform efforts. A COO/CMO, given adequate authority, responsibility, and accountability, could provide the leadership necessary to sustain organizational change over the long term. While they may share a number of common circumstances, each department and agency in the federal government nevertheless faces its own unique set of characteristics and challenges in attempting to improve and transform its business operations. 
Yet as we learned from our case study and forum discussion, a number of common criteria can be used to determine the type of COO/CMO that would be appropriate in a federal agency. Once such a determination is made, a number of common strategies can be adopted to put such a position into place and to help ensure that it will work effectively. The strategies underscore the importance of clearly identified roles and responsibilities, good working relationships, inclusive decision-making structures and processes, and solid accountability mechanisms. As Congress considers COO/CMO positions at selected federal agencies, the criteria and strategies we identified should help to highlight key issues that need to be considered, both in design of the positions and in implementation. While Congress is currently focused on two of the most challenging agencies—DOD and DHS—the problems they face are, to varying degrees, shared by the rest of the federal government. Each agency, therefore, could consider the type of COO/CMO that would be appropriate for its organization and adopt the strategies we outline to implement such a position. Because it is composed of the senior management officials in each department and agency, the President’s Management Council, working closely with OMB, could play a valuable role in leading such an assessment and helping to ensure that due consideration is given to how each agency can improve its leadership structure for management. Moreover, given the council’s charter to oversee government management reforms, it can help institutionalize a leadership position that will be essential to overseeing current and future reform efforts. 
To address business transformation and management challenges facing federal agencies, we recommend that the Director of OMB work with the President’s Management Council to (1) use the criteria that we have developed for determining the type of COO/CMO positions that ought to be established in the federal agencies that are members of the council and (2) once the types of COO/CMOs have been determined, use the key strategies we have identified in implementing these positions. Congress should consider the criteria and strategies for establishing and implementing COO/CMO positions as it develops and reviews legislative proposals aimed at addressing business transformation and management challenges facing federal agencies. In doing so, the implementation of any approach should be determined within the context of the specific facts and circumstances that relate to each individual agency. We provided a draft of this report for review and comment to the Director of OMB. The Associate Director for OMB Administration and Government Performance told us that OMB had no comments on the draft report. We also provided a draft of this report to the Secretary of the Treasury, the Acting Commissioner of Internal Revenue, the Acting Attorney General, the Executive Vice President of MIT, and the participants of the April 2007 forum for their review and technical comments. Treasury, IRS, and several forum participants provided us with technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 45 days from its date. At that time, we will send copies of this report to the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs, the Chairman and Ranking Member of the House Committee on Oversight and Government Reform, and other interested congressional parties. 
We will also send copies to the Director of OMB, the Secretary of the Treasury, the Commissioner of Internal Revenue, the Attorney General, and the President of MIT. In addition, we will make copies available to others upon request. The report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me on (202) 512-6806 or steinhardtb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives for this study were to identify criteria that can be used to determine the type of chief operating officer (COO)/chief management officer (CMO) or similar position that ought to be established in federal agencies and strategies for implementing COO/CMO positions to elevate, integrate, and institutionalize key management functions and business transformation efforts in federal agencies. To identify these criteria and strategies, we (1) gathered information on the experiences and views of officials at four organizations with COO/CMO-type positions and (2) convened a Comptroller General’s forum to gather insights from individuals with experience and expertise in business transformation, federal and private sector management, and change management. To select the organizations to include in our study, we collected and reviewed literature on general management integration approaches and organizational structures of public and private sector organizations, and we reviewed our prior work on the COO/CMO concept as well as organizational transformation issues at the Department of Defense (DOD) and the Department of Homeland Security (DHS). 
We also collected and analyzed organizational charts of the 24 Chief Financial Officers Act federal agencies as well as those federal agencies required to report under the President’s Management Agenda. We also consulted with various nonprofit organizations with experience in federal and state/local government. We sought to identify organizations that had senior-level officials with responsibility for integrating key management functions, including, at a minimum, human capital, financial management, information technology, and acquisition management, and who generally did not have direct responsibility for the mission programs and policies of their organizations. We considered a range of diverse missions and also took into account that the COO/CMOs of the organizations were appointed to their positions under varying methods. Our organization selection process was not designed to identify examples that could be considered representative of all COO/CMO-type positions. The four organizations included in our review are three federal agencies and one nonprofit organization: the Department of the Treasury (Treasury), the Internal Revenue Service (IRS), the Department of Justice (Justice), and the Massachusetts Institute of Technology (MIT). At the headquarters of these four organizations, we interviewed senior officials and we collected and reviewed documents related to the COO/CMO position. We conducted semistructured interviews with the individuals serving in the COO/CMO positions (Acting Assistant Secretary for Management at Treasury, Deputy Commissioner for Operations Support at IRS, Assistant Attorney General for Administration at Justice, and Executive Vice President at MIT) as well as those managers who reported directly to the COO/CMOs. 
These interviews focused on how the COO/CMO position functioned in their respective organizations as well as the officials’ views and insights on issues such as roles and responsibilities, reporting relationships, accountability mechanisms, and decision-making structures and processes. In carrying out our work at these four organizations, we did not assess the effectiveness of each COO/CMO serving in these respective organizations nor did we determine whether any specific COO/CMO position directly resulted in a higher level of organizational performance. Rather, we attempted to highlight the experiences and views of officials in carrying out the COO/CMO position. The Comptroller General also hosted a forum on April 24, 2007, to bring together former and current government executives and officials from private business and nonprofit organizations to discuss when and how a COO/CMO or similar position might effectively provide the continuing, focused attention essential for integrating key management functions and undertaking multiyear organizational transformation initiatives. This forum was designed for the participants to discuss these issues openly and without individual attribution. Participants were selected for their expertise but also to represent a variety of perspectives. Prior to the forum, we provided each of the participants with a briefing paper that included background information on the four case-study organizations, some preliminary results of our initial work on these case-study reviews, as well as key statements from our prior work related to the COO/CMO concept. The highlights summarized in this report do not necessarily represent the views of any individual participant or the organizations that these participants represent. We also interviewed officials from the Office of Management and Budget to discuss the establishment and implementation of COO/CMO positions in federal departments and agencies. 
We conducted our review from August 2006 through July 2007 in accordance with generally accepted government auditing standards. Executive Director, IBM Center for The Business of Government James A. Champy Chairman of Consulting, Perot Systems Corporation (Massachusetts Institute of Technology Board Member) Chief Operating Officer, U.S. Government Accountability Office President and Chief Executive Officer, National Academy of Public Administration Deputy Secretary, U.S. Department of Defense Commissioner, U.S. Internal Revenue Service Executive Director, American Society of Military Comptrollers (Former Assistant Secretary of the Air Force - Financial Management and Comptroller) President and Chief Executive Officer, Center for Strategic and International Studies (Former Deputy Secretary of Defense) Chief Administrative Officer/Chief Financial Officer, U.S. Government Accountability Office President and Chief Executive Officer, Nevada New-Tech, Inc. (Former Chief Executive Officer, U.S. Government Printing Office) Deputy Director of Management, U.S. Office of Management and Budget Managing Partner, McKinsey & Company President, U.S. Soccer Foundation (Former Deputy Mayor, Government of the District of Columbia) Chief Financial Officer, U.S. Department of Labor Chancellor, Louisiana State University and A & M College (Former Administrator, National Aeronautics and Space Administration) Develop and maintain an integrated agency accounting and financial management system, including financial reporting and internal controls, which complies with applicable accounting principles and standards and provides for complete, reliable, consistent, and timely information; the development and reporting of cost information; the integration of accounting and budgeting information; and the systematic measurement of performance. 
Direct, manage, and provide policy guidance and oversight of agency financial management personnel, activities, and operations, including the development of agency financial management budgets and recruitment, selection, and training of personnel to carry out agency financial management functions. Approve and manage financial management systems design and enhancements projects and the implementation of agency asset management systems. Paperwork Reduction Act of 1995 (Pub. L. No. 104- 13); Clinger-Cohen Act of 1996 (Pub. L. No. 104- 106), as renamed pursuant to Pub. L. No. 104-208 (Sept. 30, 1996) Carry out information resources management responsibilities of the agency, including information collection and control of paperwork, information dissemination, statistical policy and coordination, records management, privacy and information security, and information technology. Provide advice and other assistance to the head of the agency and other senior management personnel to ensure that technology is acquired and information is managed consistent with the applicable law and priorities established by the head of the agency. Develop, maintain, and facilitate the implementation of a sound, secure, and integrated information technology architecture for the agency. Promote the effective and efficient design and operation of all information resources management processes for the agency. Monitor and evaluate the performance of the agency’s information technology programs, and advise the head of the agency whether to continue, modify, or terminate a program. 
Annually, as part of the strategic planning and performance evaluation process, assess the requirements established for information resources management knowledge and skills of agency personnel; assess the extent to which the positions and personnel at both the executive and management levels meet those requirements; develop strategies and specific plans for hiring, training, and professional development to rectify any deficiency in meeting those requirements; and report to the head of the agency on the progress made in improving information resources management capability.

Responsibilities under Pub. L. No. 107-296, Title XIII:

Set the workforce development strategy of the agency.

Assess workforce characteristics and future needs based on the agency’s mission and strategic plan.

Align the agency’s human resource policies and programs with the agency’s mission, strategic goals, and performance outcomes.

Develop and advocate a culture of continuous learning to attract and retain employees with superior abilities.

Identify best practices and benchmarking studies.

Apply methods for measuring intellectual capital and identify links of that capital to organizational performance and growth.

Responsibilities under Pub. L. No. 108-136, Title XIV:

Advise and assist the head of the agency and other agency officials to ensure that the agency’s mission is achieved through the management of the agency’s acquisition activities.

Monitor and evaluate the performance of the agency’s acquisition activities and programs, and advise the agency head on the appropriate business strategy to achieve the mission of the agency.

Increase the use of full and open competition in the acquisition of property and services by establishing policies, procedures, and practices that ensure that the agency receives a sufficient number of sealed bids or competitive proposals from responsible sources at the lowest cost or best value.

Increase appropriate use of performance-based contracting and performance specifications.
Make acquisition decisions consistent with all applicable laws and establish clear lines of authority, accountability, and responsibility for acquisitions.

Manage the direction of acquisition policy and implement the agency-specific acquisition policies, regulations, and standards.

Develop and maintain an acquisition career management program to ensure an adequate professional workforce.

As part of the strategic planning and performance evaluation process, assess the knowledge and skill requirements established for agency personnel and their adequacy for facilitating the achievement of the performance goals for acquisition management; develop strategies and specific plans for hiring, training, and professional development to rectify any deficiencies in meeting such requirements; and report to the agency head on the progress made in improving acquisition management capability.

In addition to the contact named above, Sarah Veale, Assistant Director; Charlesetta Bailey; Martene Bryan; K. Scott Derrick; and Karin Fangman made major contributions to this report. Others who made important contributions include Carolyn Samuels and Jay Smale.

Related GAO Products

Defense Business Transformation: Achieving Success Requires a Chief Management Officer to Provide Focus and Sustained Leadership. GAO-07-1072. Washington, D.C.: September 5, 2007. Assesses the progress DOD has made in setting up a management framework for overall business transformation efforts and the challenges DOD faces in maintaining and ensuring success of those efforts, including the need for a CMO.

Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-452T. Washington, D.C.: February 7, 2007. Discusses the numerous management challenges at DHS, including the transformation of the department. Suggests various solutions to enhance overall transformation efforts.

High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
Reports on government programs and operations that are considered high risk. Suggests solutions and continued oversight and action by Congress.

Defense Business Transformation: A Comprehensive Plan, Integrated Efforts, and Sustained Leadership Are Needed to Assure Success. GAO-07-229T. Washington, D.C.: November 16, 2006. Discusses DOD’s efforts to develop an enterprisewide business transformation plan and compliance with legislation that addresses business systems modernization. Suggests a COO/CMO position as a solution to improve business transformation.

Department of Defense: Sustained Leadership Is Critical to Effective Financial and Business Management Transformation. GAO-06-1006T. Washington, D.C.: August 3, 2006. Discusses DOD financial and business management challenges. Suggests actions needed to enhance business and financial transformation efforts.

21st Century Challenges: Transforming Government to Meet Current and Emerging Challenges. GAO-05-830T. Washington, D.C.: July 13, 2005. Discusses long-term fiscal challenges and other significant trends and challenges facing the federal government. Suggests ways federal agencies can transform into high-performing organizations.

Department of Homeland Security: A Comprehensive and Sustained Approach Needed to Achieve Management Integration. GAO-05-139. Washington, D.C.: March 16, 2005. Examines DHS’s management integration efforts. Recommends actions to be taken by the Secretary of Homeland Security and Congress.

Chief Operating Officer Concept and its Potential Use as a Strategy to Improve Management at the Department of Homeland Security. GAO-04-876R. Washington, D.C.: June 28, 2004. Discusses the management and organizational transformation challenges at DHS. Describes how a COO can be a tool to address DHS’s challenges.

Comptroller General’s Forum: High-Performing Organizations: Metrics, Means, and Mechanisms for Achieving High Performance in the 21st Century Public Management Environment. GAO-04-343SP.
Washington, D.C.: February 13, 2004. Summarizes the findings of a GAO forum held in November 2003 on high-performing organizations. Discusses the key characteristics and capabilities of high-performing organizations.

Results-Oriented Government: Shaping the Government to Meet 21st Century Challenges. GAO-03-1168T. Washington, D.C.: September 17, 2003. Describes significant performance and management problems facing the federal government and the importance of periodic reexamination and reevaluation of agencies’ activities. Suggests a range of options that Congress could use to eliminate redundancy and improve operations.

Highlights of a GAO Roundtable: The Chief Operating Officer Concept: A Potential Strategy to Address Federal Governance Challenges. GAO-03-192SP. Washington, D.C.: October 4, 2002. Summarizes the findings of a GAO roundtable held in September 2002 on the COO concept and how it might be used in selected federal agencies as one strategy to address certain systemic governance and management challenges.

Managing for Results: Using Strategic Human Capital Management to Drive Transformational Change. GAO-02-940T. Washington, D.C.: July 15, 2002. Discusses the importance of human capital. Suggests actions that the federal government needs to take in order to reform human capital.
Agencies across the federal government are embarking on large-scale organizational transformations to address 21st century challenges. One proposed approach to address systemic federal governance and management challenges involves the creation of a senior-level position--a chief operating officer (COO)/chief management officer (CMO)--in selected federal agencies to help elevate, integrate, and institutionalize responsibility for key management functions and business transformation efforts. GAO was asked to develop criteria and strategies for establishing and implementing COO/CMO positions in federal agencies. To do so, GAO (1) gathered information on the experiences and views of officials at four organizations with COO/CMO-type positions and (2) convened a forum to gather insights from individuals with experience in business transformation. A number of criteria can be used to determine the appropriate type of COO/CMO position in a federal agency. These criteria include the history of organizational performance, degree of organizational change needed, nature and complexity of mission, organizational size and structure, and current leadership talent and focus. 
Depending on these five criteria, there could be several types of COO/CMO positions:

(1) The existing deputy position could carry out the integration and business transformation role. This type of COO/CMO might be appropriate in a relatively stable or small organization.

(2) A senior-level executive who reports to the deputy, such as a principal under secretary for management, could be designated to integrate key management functions and lead business transformation efforts in the agency. This type of COO/CMO might be appropriate for a larger organization.

(3) A second deputy position could be created to bring strong focus to the integration and business transformation of the agency. This might be the most appropriate type of COO/CMO for a large and complex organization undergoing a significant transformation to reform long-standing management problems.

Because each agency has its own set of characteristics, challenges, and opportunities, the implementation of any approach should be determined within the context of the agency's specific facts and circumstances. Once the type of COO/CMO is selected, six key strategies can be useful in implementing such positions in federal agencies.
The C-130 Hercules aircraft is a medium-range, tactical airlift aircraft designed primarily for transporting personnel and cargo. The aircraft was originally flown in 1954 and has been under continuous production ever since. The Air Force currently has approximately 700 C-130s of various configurations in its current C-130E and H fleet. The average age of the active duty C-130 fleet is over 25 years old, while the average age of the Guard and Reserve C-130s is about 15 years old. These aircraft are under the management and control of the Air Mobility Command (AMC) and are operated by the active Air Force, the Air National Guard, and the Air Force Reserve. The Air Force has just begun buying a new J model C-130. Lockheed Martin Corporation is developing the J aircraft as a commercial venture and expects it to (1) lower the cost of ownership of the fleet and (2) climb higher and faster, fly at higher cruise speeds, and take off and land in a shorter distance than the existing fleet. The J will have the same structural characteristics as previous C-130 models; however, it differs in that it includes, among other things, an advanced integrated digital avionics system, a new engine and composite propellers, a heads-up display, and a redesigned flight station to facilitate operation by a three-man versus a five-man crew. The J can also be bought in a stretched version. The aircraft is currently undergoing developmental tests and the Federal Aviation Administration (FAA) certification process is expected to end in June 1998. See appendix I for an illustration of the C-130J aircraft, along with the contractor’s comparison of the capabilities for the C-130E, H, and J. At the time of our review, 23 Air Force C-130Js were on contract, with delivery of the first aircraft initially scheduled for December 1997. The schedule has slipped, however, and delivery of the first aircraft is now scheduled for October 1998. 
The schedule has been delayed due to technical problems and the pending FAA certification. The following sections provide the answers to each of your specific questions. Our scope and methodology for obtaining this information are discussed in appendix II.

The current C-130 fleet comprises 12 different variants, and the missions vary with each variant. While most of the current fleet consists of combat delivery aircraft, many of the C-130 variants perform specialized missions. The combat delivery C-130 fleet, designated as C-130Es and C-130Hs, is used in a wide variety of wartime and peacetime missions. In wartime, the C-130 combat delivery aircraft primarily performs the intratheater portion of the airlift mission, leaving the long-range intertheater transport mission to larger aircraft such as the C-5 and C-17. These C-130s primarily provide rapid transportation of personnel or cargo for delivery by parachute to a designated drop zone, or by landing at austere locations within the conflict area. These aircraft are also the primary aeromedical evacuation aircraft in a conflict. In peacetime, the combat delivery C-130 is used for training flights, regularly scheduled channel operations, and special assignment missions. It is also used in fire fighting and humanitarian relief missions. For example, it has been used to airlift heavy equipment into remote areas of other countries to build airports and roads, and to transport local goods.

In addition to the missions performed by the basic combat delivery C-130 aircraft, 11 other variants perform specialized missions. These missions include (1) weather reconnaissance, performed by the WC-130 aircraft; (2) special communication missions, performed by the EC-130 aircraft; and (3) search and rescue, performed by the HC-130 aircraft. The 12 different C-130 models that are currently in the fleet and their respective missions are summarized in table 1. Appendix III provides further details on these C-130 models.
The Air Force plans to buy the C-130J as a one-for-one replacement of C-130Es and C-130Hs as they reach the end of their service life. Air Force officials told us that the basic missions of the C-130 fleet will not change when the new C-130J aircraft enter the fleet. However, it appears that these missions will be expanded. Specifically, Air Force officials told us that, as part of the Air Force’s planned C-130J procurement, it is planning to buy the new stretched C-130J-30. We were further told that because this aircraft will provide more room/airplane capacity, it could be used to augment intertheater missions, like strategic brigade airdrops. Final decisions regarding the procurement of the C-130J-30 and the aircraft’s use, however, will not be made until fall 1998.

At the time of our review, peacetime and wartime requirements for the Air National Guard and Air Force Reserve combat delivery aircraft inventory totaled 264 aircraft. Requirements for the Guard and Reserves’ C-130 combat delivery aircraft are established in the Air Force’s C-130 MSP, which was delivered to Congress in 1997. The source of the requirements for these units’ special mission C-130s varied depending on the model. For example, we found that:

The requirements for the weather reconnaissance WC-130 were set at 10 aircraft by Congress.

The requirements for the ski-equipped LC-130, according to officials from the National Science Foundation (NSF) and the Air National Guard, are set at 10 aircraft. These aircraft are used to conduct operations in support of military taskings and of the NSF’s polar research missions (delivering supplies, people, fuel, and scientific equipment).

The requirements for the psychological warfare EC-130, the search and rescue HC-130, and the adverse weather special operations MC-130 emanated from the theater commanders in chief. According to Air Force officials, the specific required numbers of these aircraft are classified.
Total combat delivery and special mission C-130 inventory for the Air Force Guard and Reserve was 352 aircraft as of January 1998. Appendix IV shows the inventory and locations for these aircraft. As of March 1998, Air Force officials stated that decisions regarding their plans for the future C-130 inventory had not been made. For the past 21 years, with the exception of five aircraft, Congress has directed the procurement of C-130s for the Air National Guard and Air Force Reserve units. According to C-130 program officials, the Air Force has not requested these aircraft because aircraft in those units have many years of service life remaining. Figure 1 shows the annual procurement of the 256 aircraft that Congress directed for the Guard and Reserve since 1978. These five aircraft were originally requested by the Air Force for active Air Force units but were subsequently scheduled to go to the Reserves at Keesler Air Force Base in Mississippi.

Both the Joint Chiefs of Staff’s (JCS) June 1996 Intratheater Lift Analysis and the Air Force C-130 MSP reviewed the service’s combat delivery aircraft inventory and determined that there were more C-130s in inventory than required for military operations in Korea and Southwest Asia—the two major regional contingencies the Department of Defense (DOD) uses for force structure planning purposes. About 50 C-130 aircraft were identified in the Air Force MSP as excess over requirements. Thirty of these were in the Air National Guard and Air Force Reserve units, and the remainder were in the active duty force. We were told that reductions in the active duty force structure were achieved by reclassifying some of the combat coded aircraft and designating others as ground trainers. Reductions in the Air National Guard were expected to be 24 aircraft (from 190 to 166 aircraft), and the Air Force Reserve Command units were to be reduced by 6 aircraft (from 104 to 98 aircraft).
According to Air Force officials, these reductions were not made, in accordance with restrictions in the Conference Reports on the 1998 Department of Defense Appropriations Act and the National Defense Authorization Act for Fiscal Year 1998. Specifically, the reports recommended that the Air National Guard and the Air Force Reserve C-130 aircraft remain at current levels—levels before the MSP. At the time of our review, Air Force officials told us that the Air Force was in the process of designing a plan for retiring excess C-130s.

Although the Air Force has a process governing the retirement of its aircraft, it has not been able to implement the process effectively. As a result, some C-130 aircraft have been retired with substantial service life remaining and/or shortly after the aircraft had been modified. The Air Force, however, appears to be making changes to improve this process. Air Force Instruction 16-402 governs the process for retiring aircraft. The process begins with a document called the Force Structure Plan Outlook. This document tells the commands how many aircraft are excess to requirements in a given year, usually as a result of budget constraints or a change in requirements for the fleet. Once a decision has been made to declare a certain number of aircraft excess, the commands are to review (1) the aircraft’s remaining service life and (2) the recent maintenance history on the aircraft. Program depot maintenance and other inspection records are reviewed at this point to assess whether the aircraft had significant corrosion problems, maintenance troubles, and/or a known history of performance problems. When a decision to declare a specific aircraft excess is finalized, Air Force headquarters is to determine whether other users—that is, active duty, Guard and Reserves, and ultimately other agencies—could use the aircraft. If no other users are identified, the aircraft should be retired.
The Air Force has retired C-130s with service life remaining on the aircraft. Program officials told us that such retirements have generally been driven by congressional direction to buy more C-130s than the Air Force requested in its annual budget requests and that, accordingly, it was difficult to control the retirement of C-130 aircraft. They stated that, since retirement from the fleet had been based on congressionally directed acquisitions replacing existing C-130 aircraft, they were not retiring aircraft because the service life had expired. Of the 49 C-130s retired between June 1991 and May 1997, 36 were C-130Bs with old technology and 13 were newer C-130Es; the 13 C-130Es had an average of 14 years of service life still remaining. In addition, annual congressional appropriation language states that, with the exception of safety modifications, no modifications may be done if the service plans to retire an aircraft within 5 years of the modifications. We noted that of the 49 C-130 aircraft the Air Force has retired since 1991, 40 had modifications within 5 years of retirement, totaling about $9 million. Program officials told us that it is difficult to control modifications of C-130 aircraft because the Air Force does not generally know 5 years in advance when a C-130 aircraft will leave the fleet. The Air Force appears to be taking steps to improve its C-130 modification and retirement process. Specifically, in October 1997, the Vice Chief of Staff, in a message to the lead Air Force commands for C-130 aircraft, stated that additional C-130J congressional adds should be expected for fiscal year 1998 and beyond and that the commands should plan and program accordingly. Air Force officials have stated that they will incorporate this direction in the development of their C-130 retirement plan.
In that regard, an AMC Tiger Team looking at the C-130 fleet has recommended to the Air Force Chief of Staff that 150 C-130Es with the worst service life problems be replaced with C-130J-30s. We were told, however, that final decisions were not expected on the retirement of the old C-130s and procurement of the Js until late fall 1998. Until these decisions are made and the plan released, it is too early to determine how well this directive will be implemented. As of March 1998, the Air Force had not decided how many C-130Js would be required. According to C-130 program officials, although the Air Force has a documented requirement to buy C-130Js as the need arises, a large-scale C-130J program is not needed at this time because the service life of the first C-130E will not expire until 2002. Accordingly, the Air Force has only been requesting one or two C-130Js per year since 1996 for the active force. As previously shown in figure 1, the remaining J acquisitions were congressionally directed buys for the Guard and Reserve. The Air Force began procuring the J in accordance with directions from the Air Force Chief of Staff to use fiscal year 1994 Guard and Reserve procurement funds to buy two C-130Js. Originally, the two J models were going to the active duty Air Force, which provided the Air National Guard two C-130Hs as a swap. These two J models will now be going to the Air Force Reserves at Keesler Air Force Base, Mississippi, following the flight test program. The justification for the new C-130J buys, according to requirements, acquisition, and budget documents, is to reduce the cost of ownership of the C-130E and H fleet, with anticipated cost savings associated with the new technology and the reduced crew and maintenance needs of the J aircraft. A review of the C-130J program office’s life-cycle cost estimate was completed in June 1996 by the Air Force Cost Analysis Improvement Group.
The report stated that operations and support savings are forecast from a program of 135 C-130Js bought over the 1996 to 2014 time frame with the new technology and the reduced crew and maintenance needs of the J aircraft. Air Force officials, however, acknowledge that savings associated with this commercial buy will not be substantiated until several years after delivery/transfer of ownership is taken by the Air Force, which, as previously stated, is now expected in October 1998 for the first J aircraft. Additionally, during our review, some Air Force officials expressed concern that the normal requirements process was not followed in the recent J buys. They stated that requirement documents for the EC-130Js and WC-130Js were written after the Air Force had made a commitment to buy the aircraft. For example, Congress appropriated funds for two unrequested EC-130Js—one in fiscal year 1997 and one in fiscal year 1998. An October 7, 1997, memorandum from the Office of the Secretary of the Air Force for Acquisition noted, however, that a validated operational requirements document had not yet been generated. Additionally, these officials noted that there have been concerns that the EC-130J buy may not address all of the problems in the current EC-130 fleet—primarily, the lack of adequate space on the aircraft. There are 12 crew stations aboard the EC-130 aircraft and we were told that there is barely enough room for the broadcasting equipment needed for each station. The Air Force has looked at the wide body Boeing 757 as a replacement for the current EC-130 fleet, but has since decided to use the J. Regarding alternatives to the J, we were told that alternatives have been evaluated and rejected in the past. Specifically, in December 1996, an unsolicited proposal was submitted to modernize the C-130 fleet. Appendix V summarizes the Air Force’s reasons for rejecting this proposal. 
In addition to rejecting prior alternatives to the J for cost and technical reasons, Air Force officials told us that the alternatives were premature since the first C-130E is not scheduled to retire until 2002. Air Force officials also told us that the Air Force is currently considering alternatives presented by an AMC tiger team. Among other things, the goals of the AMC effort included developing an integrated plan to improve reliability and maintainability of the fleet, produce greater commonality in the fleet, and provide an overall acquisition strategy for the C-130 weapon system. After review of specific problems in the C-130 fleet, which included the inability of the fleet to meet Global Air Traffic Management requirements and structural/corrosion problems of the aging fleet, the tiger team recommended that the Air Force (1) modify 360 of the “best structural” C-130s with a block modification process that would essentially put a new front end, including a new engine and cockpit, on the older C-130 aircraft and (2) replace aircraft with the worst service life/structural/corrosion problems—about 150 in this category—with new C-130J-30s. Final decisions on both matters, however, are not expected until the fall of 1998. Air Force C-130 officials stated that funding shortfalls for the C-130 fleet have historically been a problem, primarily because Congress has added C-130 aircraft to its budget without providing the needed funding for logistics support. This support includes spare parts, training, and maintenance that is normally provided with a weapon system. These officials further stated that the Air Force was able to deal with the shortfalls in the past because a large logistics support infrastructure was in place for the C-130E and C-130H models, which helped them to absorb the shortfalls.
However, they noted that because the C-130J is so different from those prior models, the majority of the support in the current infrastructure cannot be used for the J aircraft. Additionally, these officials noted that the Air Force, with its constrained budgets and various weapon system priorities, has not budgeted for these funding shortfalls. According to these officials, without the needed support funding, it is possible that some C-130J aircraft may have to be cannibalized to support others in the fleet or the unsupported C-130Js may have to be parked on the ramp at some locations. The latest Air Force funding shortfall document reported a cumulative logistics support shortfall through fiscal year 2003 of $302.11 million for the 23 C-130J aircraft on contract through 1998, the 1 requested in 1999, and the 2 that are expected to be bought in 2002 and 2003. Table 2 presents the annual and cumulative logistics support funding shortfalls associated with the C-130J program as of January 7, 1998.

C-130J program officials told us that the lack of commonality of the J with the existing fleet is causing the Air Force to fund the following:

Interim Contractor Support (ICS). This includes not only the typical ICS costs such as on-site contractor personnel, technical data, and repair of reparables, but also a commercial supply support system. This support system is needed because, unlike the previous C-130 models, the Air Force does not yet have a database to determine the mean time between failure rates of the C-130J spares. As a result, the correct number of spares to maintain the fleet’s mission capable rate is not known. Hence, provisioning for the C-130J will be contracted out with this contractor supply support system.

C-130J training systems (simulators) and the associated costs of training flight and maintenance crews. Current plans are to buy five flight simulators for pilot training, a maintenance trainer, and a loadmaster trainer.
C-130J peculiar support equipment. This is the support equipment peculiar to the J and includes new or modified support equipment like testers and special tools needed to test, remove, replace, or handle the C-130J unique items on the aircraft.

Air Force officials stated that the J’s funding problems are further exacerbated because the aircraft are being assigned to several different bases rather than a single base. Specifically, the 23 Air Force C-130Js on contract are assigned as follows: 9 WC-130Js and 4 combat delivery Js will be located at Keesler Air Force Base, Mississippi; 2 EC-130Js will be located at the Air National Guard unit in Harrisburg, Pennsylvania; and 8 combat delivery C-130Js will be located at the Air National Guard unit in Baltimore, Maryland. These different base assignments result in redundant logistical support, such as maintenance and training costs, at each base.

Additionally, there has been much discussion between the Air Force and the Director of Operational Test and Evaluation (DOT&E) regarding the scope of Live Fire Test (LFT) for the C-130J program. An agreement was reached between the two in March 1998 and will be reflected in the C-130J Test and Evaluation Master Plan and appropriate live fire test plans. While there is currently a funding shortfall associated with the LFT program, the Air Force has agreed to fund about $5.5 million for the following tests: (1) the wing dry bay, (2) the composite propeller, (3) engine fire suppression (combat and non-combat), (4) the vulnerability analysis, and (5) the engine blade containment. DOT&E will fund the hydrodynamic ram testing and the mission abort assessment, which will be about $2.2 million. Air Force and contractor officials have been working to remedy the C-130J shortfall with such efforts as a commercial supply support system, also called shared logistics.
Shared logistics places high-cost, low-use support equipment at a centralized location, rather than at each base, while high-usage and special mission spares are placed at each of the bases where the C-130J will be located. Air Force officials said that, according to data provided by Lockheed Martin, costs for spares would total about $20 million per base for a new aircraft like the C-130J if each C-130J base were provided a full complement of spare parts. Under the shared logistics concept, only $4 million would be required for each C-130J base compared with the previously stated $20 million. Savings from this concept have already been incorporated into the Air Force’s budget plan. Although no location has been selected for the centralized site, several have been suggested, including options for putting the centralized location where most of the planes will be based or at a location with access to overnight delivery services to facilitate just-in-time deliveries. In addition to the shared logistics savings, Congress has provided about $24 million in the fiscal year 1998 budget to help fund C-130J support shortfalls.

DOD concurred with our report. DOD provided technical suggestions for clarification, and we have incorporated these suggestions in the text of the report, where appropriate. The DOD comments are reprinted in appendix VI. We are sending copies of this report to appropriate congressional committees and the Secretaries of Defense and the Air Force. We will also provide copies to other interested parties upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VII.

The C-130J Hercules is the next-generation medium-range tactical cargo and personnel aircraft that will be introduced into the existing C-130 fleet of Es and Hs. It is intended to replace aging C-130E/Hs as they approach the end of their service life.
Although the C-130 fleet is best known as the “workhorse” of the active duty Air Force, the Air National Guard, and the Air Force Reserve, the Navy and other governments use the airplane as well. According to Lockheed Martin—the contractor for the “J”—the C-130J incorporates state-of-the-art technology that will reduce manpower requirements, operating costs, and life-cycle costs. Although the C-130J essentially has the same structural characteristics as previous models, there are some significant differences. These include the advanced two-pilot flight station with fully integrated digital avionics system with color multi-functional liquid crystal displays and head-up displays; navigation systems with dual embedded Global Positioning Systems, mission planning system, low-power color radar, digital map display, and new digital autopilot; simplified fuel system with provisions for adding a receiver aerial refueling probe or tanker aerial refueling pods; an extensive built-in test integrated diagnostics with an advisory, caution, and warning system; and higher power turboprop engines with more efficient, six-bladed all composite propellers. According to Lockheed Martin, the above enhancements will enable the airplane to climb higher and faster, fly farther at a higher cruise speed, and take off and land in a shorter distance than the existing C-130 fleet. Table I.1 presents the contractor’s comparison of the J and J-30 capabilities with those of previous models in terms of maximum payload (pounds), maximum payload range (nautical miles), maximum-effort takeoff roll (feet), cruise speed (knots), cargo floor length (feet), and runway length/width/taxiway requirements (feet); the table’s data are not reproduced here. Figure I.1 is a picture of the C-130J-30 aircraft. 
To accomplish our objectives, we interviewed a number of officials within the Office of the Secretary of Defense; the Joint Chiefs of Staff; the Office of the Secretary of the Air Force; the Air Mobility Command, Scott Air Force Base, Illinois; the Air Combat Command, Langley Air Force Base, Virginia; the Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; the Air Education and Training Center, Little Rock Air Force Base, Arkansas; Air National Guard Headquarters, Washington, D.C.; Air National Guard Readiness Center, Andrews Air Force Base, Maryland; Air National Guard, Harrisburg, Pennsylvania; the Air Force Reserve Command, Robins Air Force Base, Georgia; the Warner-Robins Air Logistics Center, Robins Air Force Base, Georgia; Air Force Reserve Components in Baltimore and Minneapolis; Lockheed Martin, Arlington, Virginia; the Air Force Audit Agency; the National Science Foundation, Virginia; and the Defense Contract Management Command, Marietta, Georgia. To ascertain the mission of the current and planned C-130 fleet, we reviewed the Air Combat Command’s C-130 Total Force Plan Briefing, C-130 Combat Delivery Mission Area Plan, and Combat Air Forces Concept of Operations for Theater Airlift; the Air Mobility Command’s 1998 Air Mobility Master Plan; Operational Requirements Documents for the various C-130 model designs; the Joint Chiefs of Staff’s Intratheater Lift Analysis; the Air Force C-130 Master Stationing Plan; prior and current C-130 Selected Acquisition Reports; and Air Force headquarters’ written responses in this area. To obtain the Air National Guard and Air Force Reserve C-130 requirements—including current and planned inventory and the C-130 procurement history for these units—we obtained this information from headquarters Air National Guard, Washington, D.C.; the Air Force Reserve Command, Robins Air Force Base, Georgia; and the Air Logistics Center, Warner-Robins Air Force Base, Macon, Georgia. 
To ascertain the Air Force plans for retiring C-130s identified as excess aircraft in the C-130 Master Stationing Plan, we reviewed the Final C-130 Master Stationing Plan and Public Law 103-335, section 8125, which requires the plan. In addition, we obtained written responses from Air Force headquarters, Air National Guard, Air Force Reserve headquarters, and the Air Mobility Command on this matter. To determine the effectiveness of the Air Force’s system for retiring old aircraft when new C-130s enter the fleet, we reviewed listings of modifications in the C-130 System Program Office’s Time Compliance Technical Orders that were done to C-130B and E models retired since 1978, and applicable laws and regulations regarding modifying and retiring aircraft. We also obtained views from C-130 program officials on how retirement of the fleet was done in the past and how they expect it will be done in the future. To determine the Air Force requirement/justification for the C-130J aircraft and whether alternatives to buying the new J model were considered, we reviewed the C-130J Operational Requirement Document; the Single Acquisition Management Plan and other applicable program documentation; Senate Report 104-267, which required the Secretary of Defense to report by March 1997 on the benefits of remanufacturing the C-130 fleet and the Under Secretary of Defense for Acquisition and Technology’s April 29, 1997, letter to congressional defense committees on this subject; Wright-Patterson Air Force Base’s assessment of an unsolicited proposal to remanufacture the C-130 fleet; and data provided by Air Force headquarters regarding the requirement for the program. We also toured the C-130J-30 on display at Ronald Reagan National Airport. 
To ascertain the Air Force logistic support funding needs for the C-130J aircraft, we reviewed the October 1995 and November 1996 C-130J contracts and applicable documentation for subsequent options that were exercised, and the quarterly C-130J Defense Acquisition Executive Summary Report for the C-130J program. We also obtained views, perspectives, and supporting documentation from officials at Air Force headquarters, Air Combat Command, Air Mobility Command, and the C-130J System Program Office at Wright-Patterson Air Force Base regarding the reasons for the funding shortfalls and initiatives to reduce the shortfalls. We conducted this review from January 1997 to March 1998 in accordance with generally accepted government auditing standards.

C-130 model: C-130E and H Hercules (combat delivery models)
Commands: Air Mobility Command, Air Combat Command, Air Force Reserve, Air Education and Training Command, Air National Guard, and Air Force Special Operations Command
Mission: The C-130 Hercules combat delivery models perform the intratheater portion of the airlift mission. Their primary mission is to provide rapid transportation of personnel or cargo for delivery by parachute to hostile areas, or by landing at rough dirt strips within those areas. The C-130E/H models can also be used as tactical transports and can be readily converted for aeromedical evacuation or aerial delivery missions. The C-130 is the primary tactical aeromedical evacuation aircraft. During peacetime, it participates in mercy flights throughout the world, bringing food, clothing, shelter, doctors, nurses, and medical supplies as well as moving victims to safety.
Special equipment/features: The C-130H is generally similar to the E model but has updated turboprops, a redesigned outer wing, updated avionics, and other minor improvements. In its airlift configuration, the C-130E/H can carry up to 92 combat troops with equipment, 64 paratroopers, 74 litter patients, or 6 standard 463-L pallets. 
It can transport various configurations of rolling stock, including some oversize vehicles.

C-130 model: AC-130H Spectre
Command: Air Force Special Operations Command
Mission: The AC-130H is a gunship with primary missions of close air support, air interdiction, and armed reconnaissance. Additional missions include perimeter and point defense, escort, landing, drop and extraction zone support, forward air control, limited command and control, and combat search and rescue.
Special equipment/features: These heavily armed aircraft incorporate side-firing weapons integrated with sophisticated sensor, navigation, and fire control systems to provide firepower or area saturation during extended periods, at night, and in adverse weather. The sensor suite consists of a low light level television sensor and an infrared sensor. Radar and electronic sensors also give the gunship a method of positively identifying friendly ground forces as well as effective ordnance delivery during adverse weather conditions. Navigational devices include an inertial navigation system and global positioning system.

C-130 model: AC-130U Spectre Gunship
Command: Air Force Special Operations Command
Mission: The AC-130U’s primary missions are nighttime close air support for special operations and conventional ground forces; air interdiction; armed reconnaissance; air base, perimeter, and point defense; land, water, and heliborne troop escort; drop, landing, and extraction zone support; forward air control; limited airborne command and control; and combat search and rescue support.
Special equipment/features: The AC-130U has one 25-millimeter Gatling gun, one 40-millimeter cannon, and one 105-millimeter cannon for armament and is the newest addition to the Air Force Special Operations Command’s fleet. 
This heavily armed aircraft incorporates side-firing weapons integrated with sophisticated sensor, navigation, and fire control systems to provide firepower or area saturation at night and in adverse weather. The sensor suite consists of an all light level television system and an infrared detection set. A multi-mode strike radar provides extreme long-range target detection and identification. The fire control system offers a dual target attack capability, whereby two targets up to 1 kilometer apart can be simultaneously engaged by two different sensors, using two different guns. Navigational devices include the inertial navigation system and global positioning system. The aircraft is pressurized, enabling it to fly at higher altitudes and allowing for greater range than the AC-130H. Defensive systems include a countermeasures dispensing system that releases chaff and flares to counter radar- and infrared-guided anti-aircraft missiles. Also, infrared heat shields mounted underneath the engines disperse and hide engine heat sources from infrared-guided anti-aircraft missiles.

C-130 model: EC-130E “Commando Solo”
Command: Air National Guard
Mission: EC-130E Commando Solo, the Air Force’s only airborne radio and television broadcast mission, is assigned to the 193rd Special Operations Wing, the only Air National Guard unit assigned to the Air Force Special Operations Command. Commando Solo conducts psychological operations and civil affairs broadcasts. The EC-130E flies during either day or night scenarios and is air refuelable. Commando Solo provides an airborne broadcast platform for virtually any contingency, including state or national disasters or other emergencies. Secondary missions include command and control communications countermeasures and limited intelligence gathering. 
Special equipment/features: Highly specialized modifications include enhanced navigation systems, self-protection equipment, and the capability to broadcast color television on a multitude of worldwide standards.

C-130 model: EC-130E Airborne Battlefield Command and Control Center (ABCCC)
Command: Air Combat Command
Mission: The EC-130E is a modified C-130 “Hercules” aircraft designed to carry the ABCCC capsules. While functioning as an extension of ground-based command and control authorities, its primary mission is providing flexibility in the overall control of tactical air resources. In addition to maintaining control of air operations, ABCCC can provide communications to higher headquarters, including national command authorities, in both peace and wartime environments.
Special equipment/features: These one-of-a-kind aircraft include the addition of external antennae to accommodate the vast number of radios in the capsule, heat exchanger pods for additional air conditioning, an aerial refueling system, and special mounted rails for uploading and downloading the capsule. The ABCCC system is a high-tech automated airborne command and control facility featuring computer-generated color displays, digitally controlled communications, and rapid data retrieval. The platform’s 23 fully securable radios, secure teletype, and 15 automatic fully computerized consoles allow the battlestaff to analyze current combat situations and direct offensive air support.

C-130 model: EC-130H “Compass Call”
Commands: Air Combat Command and Air Force Materiel Command
Mission: Compass Call is the designation for a modified version of the C-130 “Hercules” aircraft configured to perform tactical command, control, and communications countermeasures. Specifically, the aircraft uses noise jamming to prevent communication or the transfer of information essential to command and control of weapon systems and other resources. 
It primarily supports tactical air operations but also can provide jamming support to ground force operations.
Special equipment/features: Modifications to the aircraft include an electronic countermeasures system (Rivet Fire), air refueling capability, and associated navigation and communications systems. Rivet Fire demonstrated its effect on enemy command and control networks in Panama and Iraq.

C-130 model: HC-130H/N and HC-130P
Commands: Air Combat Command, Air Force Reserve, and Air National Guard
Mission: The HC-130H/N’s mission is search and rescue. The HC-130P does aerial refueling of combat search and rescue helicopters and deployment of para-rescuemen. The HC-130P deploys worldwide to provide combat search and rescue coverage for U.S. and allied forces. Combat search and rescue missions include flying low level, preferably at night with the aid of night vision goggles, to an area where aerial refueling of a rescue helicopter is performed or para-rescuemen are deployed. The secondary mission of the HC-130P is peacetime search and rescue. HC-130P aircraft and crews are trained and equipped for search and rescue in all types of terrain, including arctic, mountain, and maritime. Peacetime search and rescue missions may include searching for downed or missing aircraft, sinking or missing water vessels, or missing persons. The HC-130P can deploy para-rescuemen to a survivor, escort helicopters to a survivor, or airdrop survival equipment.
Special equipment/features: H/N aircraft are equipped with an advanced avionics package. Improvements are being made to the HC-130P to provide improved navigation, enhanced communications, better threat detection, and more effective countermeasures systems. When fully modified, the HC-130P will have a self-contained navigation system, including an inertial system and global positioning system. It will also have a missile warning system, radar warning receiver, and associated chaff and flare dispenser systems. 
C-130 model: LC-130
Command: Air National Guard
Mission: The primary mission of this model is Arctic support. Two specific missions are support of (1) the National Science Foundation in Antarctica and (2) assorted national and international scientific activities in Greenland. (The Navy also operates seven LC-130 aircraft in Antarctica. These aircraft move large amounts of cargo, personnel, and fuel throughout the continent.)
Special equipment/features: LC-130s are specially equipped with a landing gear wheel/ski modification for operation in Arctic regions.

C-130 model: MC-130E Combat Talon I and MC-130H Combat Talon II
Commands: Air Force Special Operations Command, Air Force Reserve, and Air Education and Training Command
Mission: The mission of the Combat Talon I/II is to provide global, day, night, and adverse weather capability to airdrop and airland personnel and equipment in support of U.S. and allied special operations forces. The MC-130E also has a deep penetrating helicopter refueling role during special operations missions.
Special equipment/features: These aircraft are equipped with in-flight refueling equipment, terrain-following and terrain-avoidance radar, an inertial and global positioning satellite navigation system, and a high-speed aerial delivery system. The special navigation and aerial delivery systems are used to locate small drop zones and deliver people or equipment with greater accuracy and at higher speeds than possible with a standard C-130. The aircraft is able to penetrate hostile airspace at low altitudes, and crews are specially trained in night and adverse weather operations. Nine of the MC-130Es are equipped with the Fulton surface-to-air recovery system, a safe, rapid method of recovering personnel or equipment from either land or water. It involves use of a large, helium-filled balloon to raise a 450-foot nylon lift line. The MC-130E flies toward the lift line and snags it with scissors-like arms located on the aircraft nose. 
The person or equipment is lifted off, experiencing less shock than that caused by a parachute opening. Aircrew members then use a hydraulic winch to pull the person or equipment aboard through the open rear cargo door. The MC-130H features highly automated controls and displays to reduce crew size and workload.

C-130 model: MC-130P Combat Shadow
Commands: Air Force Special Operations Command, Air Education and Training Command, and Air Force Reserve
Mission: The MC-130P Combat Shadow flies clandestine or low visibility, low-level missions into politically sensitive or hostile territory to provide air refueling for special operations helicopters. The MC-130P primarily flies its single- or multi-ship missions at night to reduce detection and intercept by airborne threats. Secondary mission capabilities include airdrop of small special operations teams, small bundles, and rubber raiding craft; night-vision goggle takeoffs and landings; tactical airborne radar approaches; and in-flight refueling as a receiver.
Special equipment/features: When modifications are complete in fiscal year 1999, all MC-130P aircraft will feature improved navigation, communications, threat detection, and countermeasures systems. When fully modified, the Combat Shadow will have a fully integrated inertial navigation and global positioning system, and night-vision goggle-compatible interior and exterior lighting. It will also have a forward-looking infrared radar, missile and radar warning receivers, chaff and flare dispensers, and a night-vision goggle-compatible heads-up display. In addition, it will have satellite and data burst communications, as well as in-flight refueling capability as a receiver. The Combat Shadow can fly in the day against a reduced threat; however, crews normally fly night, low-level, air refueling and formation operations using night-vision goggles.

C-130 model: NC-130A, E, H
Command: Air Force Materiel Command
Mission: Test aircraft. 
C-130 model: WC-130 Hercules
Command: Air Force Reserve
Mission: The WC-130 Hercules is a high-wing, medium-range aircraft used for weather reconnaissance missions. It is a modified version of the C-130 configured with computerized weather instrumentation for penetration of severe storms to obtain data on storm movements, dimensions, and intensity. The WC-130 is flown exclusively from Keesler Air Force Base by Air Force Reserve organizations known as the Hurricane Hunters. The hurricane reconnaissance area includes the Atlantic Ocean, Caribbean Sea, Gulf of Mexico, and central Pacific Ocean areas. The WC-130 is capable of staying aloft nearly 18 hours during missions. It is equipped with two external 1,400-gallon fuel tanks, an internal 1,800-gallon fuel tank, and uprated engines. An average weather reconnaissance mission might last 11 hours and cover almost 3,500 miles while the crew collects and reports weather data.
Special equipment/features: Weather equipment aboard the aircraft provides a high-density, high-accuracy horizontal atmospheric sensing capability. Sensors installed on the aircraft measure outside temperature, humidity, absolute altitude of the aircraft, pressure altitude, wind speed, and direction once per second. This information, along with an evaluation of other meteorological conditions, such as turbulence, icing, radar returns, and visibility, is encoded by the on-board meteorologist and transmitted by satellite to the National Hurricane Center. Special equipment measures the atmosphere vertically by using an expendable instrument, which is dropped from the aircraft. The 16-inch-long cylinder is dropped every 400 miles while on a weather track and in the center or eye of a hurricane. A vertical atmospheric profile of barometric pressure, temperature, humidity, wind speed, and direction is received from the instrument as it descends to the ocean surface, slowed and stabilized by a small parachute. 
From this information, the system operator analyzes and encodes data for satellite transmission to the National Hurricane Center.

[C-130 unit locations listed in this appendix: Baltimore, Md.; Quonset, R.I.; Channel Island ANG Station, Calif.; Reno, Nev.; Peoria, Ill.; Little Rock, Ark.; Selfridge, Mich.; Schenectady, N.Y.; Nashville, Tenn.; Charleston, W.Va.; Louisville, Ky.; Minneapolis/St. Paul, Minn.; Dallas, Tex.; Oklahoma City, Okla.; St. Joseph, Mo.; Charlotte, N.C.; Cheyenne, Wyo.; Savannah, Ga.; Wilmington, Del.; Martinsburg, W.Va.; Mansfield Lahm Airport, Ohio; McEntire, S.C.; New Orleans, La.; Harrisburg, Pa.; Suffolk, N.Y.; Moffett NAS, Calif.; Schenectady, N.Y.; Portland IAP, Oreg.; Patrick AFB, Fla.; Eglin AFB, Fla.; Minneapolis/St. Paul, Minn.; Keesler AFB, Miss.; Willow Grove, Pa.; Gen. Mitchell IAP, Wis.; Pittsburgh, Pa.; Dobbins, Ga.; Niagara Falls, N.Y.; Peterson AFB, Colo.; Maxwell AFB, Ala.; Patrick AFB, Fla.; Portland IAP, Oreg.; Eglin AFB, Fla.; Eglin AFB, Fla.; Keesler AFB, Miss.]

On December 16, 1996, an unsolicited proposal was submitted to the Air Force to modernize the C-130 fleet. The proposal anticipated a 21-month schedule to fabricate prototypes at a firm fixed price of $50 million, with projected potential fleet-wide savings of $6 billion. The C-130 Program Office’s review of the unsolicited proposal concluded that, although the proposal was technically feasible, it was impractical due to cost, schedule, and technical risks. The actual evaluation is labeled FOR OFFICIAL USE ONLY, precluding a detailed explanation of those risks in this report. 
However, generic examples of the risks included: aggressive concurrency in the program schedule; reliance on reverse engineering in lieu of original manufacturer equipment data because of the proprietary rights of the original manufacturer; use of unproven technology; inadequate support equipment, manuals, training, and spares for the prototype and for the test and evaluation effort; inadequate software development and integration for an undefined avionics suite, including lack of crew-member workload analysis; a need for an additional $15 million for the test and evaluation effort over the firm fixed-price proposal of $50 million; and insufficient substantiation of the $6-billion claimed savings. In recommending nonapproval of the unsolicited proposal, the C-130 Program Office also cited the lack of program requirement, funding, and direction for the proposed C-130 program as additional reasons for rejection. Finally, the Program Office concluded that the proposal was not unique and innovative as prescribed in the Federal Acquisition Regulation for unsolicited proposals. Hence, even if the proposal were otherwise acceptable, it would not qualify for an exception to full and open competition.

Daniel Hauser
Pursuant to a congressional request, GAO reviewed the Air Force's C-130 program, focusing on: (1) the mission of the current and planned C-130 fleet; (2) the C-130 requirements for the Air National Guard and Air Force Reserve; (3) the C-130 procurement history in the Guard and Reserve units; (4) the Air Force's plans for retiring excess C-130s in the Master Stationing Plan (MSP); (5) whether the Air Force's process for retiring C-130 aircraft when replacement aircraft become available is effective; (6) what the Air Force C-130J requirement is and what other alternatives were considered; and (7) the C-130J logistics support funding shortfall. GAO noted that: (1) the current C-130 fleet is comprised of 12 different variants and the missions vary with each variant; (2) while most of the current fleet is comprised of combat delivery aircraft, many of the C-130 variants perform specialized missions; (3) at the time of GAO's review, peacetime and wartime requirements for the Air National Guard and Air Force Reserve combat delivery aircraft totaled 264 aircraft; (4) requirements for the Guard and Reserves' C-130 combat delivery aircraft are established in the Air Force's C-130 MSP, which was delivered to Congress in 1997; (5) for the past 21 years, with the exception of five aircraft, Congress has directed the procurement of C-130s for the Air National Guard and Air Force Reserve units; (6) according to C-130 program officials, the Air Force has not requested these aircraft because aircraft in those units have many years of service life remaining; (7) about 50 C-130 aircraft were identified in the Air Force MSP as excess over requirements; (8) 30 of these were in the Air National Guard and Air Force Reserve units and the remainder were in the active duty force; (9) reductions in the Air National Guard were expected to be 24 aircraft and the Air Force Reserve Command units were to be reduced by 6 aircraft; (10) according to Air Force officials, these reductions 
were not made; (11) although the Air Force has a process for governing the retirement of its aircraft, it has not been able to implement the process effectively; (12) as a result, some C-130 aircraft have been retired with substantial service life remaining and shortly after the aircraft had been modified; (13) as of March 1998, the Air Force had not decided how many C-130Js will be required; (14) the Air Force has been requesting one or two C-130Js per year since 1996 for the active force; (15) the remaining J acquisitions were congressionally directed buys for the Guard and Reserve; (16) regarding alternatives to the J, GAO was told that alternatives have been evaluated and rejected in the past; and (17) Air Force C-130 officials stated that funding shortfalls for the C-130 fleet have historically been a problem, primarily because Congress has added C-130 aircraft to the Air Force's budget without providing the needed funding for logistics support.
Hepatitis C was first recognized as a unique disease in 1989. It is the most common chronic blood-borne infection in the United States and is a leading cause of chronic liver disease. The virus causes a chronic infection in 85 percent of cases. Hepatitis C, which is the leading indication for liver transplantation, can lead to liver cancer, cirrhosis (scarring of the liver), or end-stage liver disease. Most people infected with hepatitis C are relatively free of physical symptoms. While hepatitis C antibodies generally appear in the blood within 3 months of infection, it can take 15 years or longer for the infection to develop into cirrhosis. Blood tests to detect the hepatitis C antibody became available in 1992; they have helped to virtually eliminate the risk of infection through blood transfusions and have helped curb the spread of the virus. Many individuals were already infected, however, and because many of them have no symptoms, they are unaware of their infection. Hepatitis C continues to be spread through blood exposure, such as inadvertent needle-stick injuries among health care workers and the sharing of needles by intravenous drug abusers. Early detection of hepatitis C is important because undiagnosed persons miss opportunities to safeguard their health and may unknowingly behave in ways that could speed the progression of the disease. For example, alcohol use can hasten the onset of cirrhosis and liver failure in those infected with the hepatitis C virus. In addition, persons carrying the virus pose a public health threat because they can infect others. The Centers for Disease Control and Prevention estimates that nearly 4 million Americans are infected with the hepatitis C virus. Approximately 30,000 new infections occur annually. 
The prevalence of hepatitis C infection among veterans is unknown, but limited survey data suggest that hepatitis C has a higher prevalence among veterans who are currently using VA’s health care system than among the general population because of veterans’ higher frequency of risk factors. A 6-year study (1992-1998) of veterans who received health care at the VA Palo Alto Health Care System in Northern California reported that hepatitis C infection was much more common among veterans within a very narrow age distribution—41 to 60 years of age—and that intravenous drug use was the major risk factor. VA began a national study of the prevalence of hepatitis C in the veteran population in October 2001. Data collection for the study has been completed, but the results have not been approved for release. The prevalence of hepatitis C among veterans could have a significant impact on current and future VA health care resources, because hepatitis C accounts for over half of the liver transplants needed by VA patients (costing as much as $140,000 per transplant), and the drug therapy to treat hepatitis C is costly (about $13,000 for a 48-week treatment regimen). In the last few years, considerable research has been done concerning hepatitis C. The National Institutes of Health (NIH) held a consensus development conference on hepatitis C in 1997 to assess the methods used to diagnose, treat, and manage hepatitis C infections. In June 2002, NIH convened a second hepatitis C consensus development conference to review developments in the management and treatment of the disease and identify directions for future research. This second panel concluded that substantial advances had been made in the effectiveness of drug therapy for chronic hepatitis C infection. 
VA’s Public Health Strategic Healthcare Group is responsible for VA’s hepatitis C program, which mandates universal screening to identify at-risk veterans when they visit VA facilities for routine medical care, along with testing of those with identified risk factors or those who simply want to be tested. VA has developed guidelines intended to assist health care providers who screen, test, and counsel veterans for hepatitis C. Providers are to educate veterans about their risk of acquiring hepatitis C, notify veterans of hepatitis C test results, counsel those infected with the virus, help facilitate behavior changes to reduce veterans’ risk of transmitting hepatitis C, and recommend a course of action. In January 2003, we reported that VA medical facilities varied considerably in how long veterans must wait for physician specialists to evaluate their medical conditions and make hepatitis C treatment recommendations. To assess the effectiveness of VA’s implementation of its universal screening and testing policy, VA included performance measures in the fiscal year 2002 network performance plan. Network performance measures are used by VA to hold managers accountable for the quality of health care provided to veterans. For fiscal year 2002, the national goal for testing veterans identified as at risk for hepatitis C was established at 55 percent, based on preliminary performance results obtained by VA. To measure compliance with the hepatitis C performance measures, VA uses data collected monthly through its External Peer Review Program, a performance measurement process under which medical record reviewers collect data from a sample of veterans’ computerized medical records. Development of VA’s computerized medical record began in the mid-1990s, when VA integrated a set of clinical applications that work together to provide clinicians with comprehensive medical information about the veterans they treat. 
Clinical information is readily accessible to health care providers at the point of care because the veteran’s medical record is always available in VA’s computer system. All VA medical facilities have computerized medical record systems. Clinical reminders are electronic alerts in veterans’ computerized medical records that remind providers to address specific health issues. For example, a clinical reminder would alert the provider that a veteran needs to be screened for certain types of cancer or other disease risk factors, such as hepatitis C. In July 2000, VA required the installation of hepatitis C clinical reminder software in the computerized medical record at all facilities. This reminder alerted providers when they opened a veteran’s computerized medical record that the veteran needed to be screened for hepatitis C. In fiscal year 2002, VA required medical facilities to install an enhanced version of the July 2000 clinical reminder. The enhanced version alerts the provider to at-risk veterans who need hepatitis C testing, is linked directly to the entry of laboratory orders for the test, and is satisfied once the hepatitis C test is ordered. Even though VA’s fiscal year 2002 performance measurement results show that it tested 62 percent of veterans identified to be at risk for hepatitis C, exceeding its national goal of 55 percent, thousands of veterans in the sample who were identified as at risk were not tested. Moreover, the percentage of veterans identified as at risk who were tested varied widely among VA’s 21 health care networks. Specifically, we found that VA identified in its performance measurement sample 8,501 veterans nationwide who had hepatitis C risk factors out of a sample of 40,489 veterans visiting VA medical facilities during fiscal year 2002. 
VA determined that tests were completed, in fiscal year 2002 or earlier, for 62 percent of the 8,501 veterans based on a review of each veteran’s medical record through its performance measurement process. For the remaining 38 percent (3,269 veterans), VA did not complete hepatitis C tests when the veterans visited VA facilities. The percentage of identified at-risk veterans tested for hepatitis C ranged, as table 1 shows, from 45 to 80 percent for individual networks. Fourteen of VA’s 21 health care networks exceeded VA’s national testing performance goal of 55 percent, with 7 networks exceeding VA’s national testing performance level of 62 percent. The remaining 7 networks that did not meet VA’s national performance goal tested from 45 percent to 54 percent of at-risk veterans. VA’s fiscal year 2002 testing rate for veterans identified as at risk for hepatitis C reflects tests performed in fiscal year 2002 and in prior fiscal years. Thus, a veteran who was identified as at risk and tested for hepatitis C in fiscal year 1998 and whose medical record was reviewed as part of the fiscal year 2002 sample would be counted as tested in VA’s fiscal year 2002 performance measurement result. As a result of using this cumulative measurement, VA’s fiscal year 2002 performance result for testing at-risk veterans who visited VA facilities in fiscal year 2002 and needed hepatitis C tests is unknown. To determine whether the testing rate is improving for veterans who needed hepatitis C tests when they were seen at VA in fiscal year 2002, VA would also need to look at a subset of the sample of veterans currently included in its performance measure. 
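The difference between the cumulative measure and a current-fiscal-year subset can be sketched in a few lines of code. This is an illustrative sketch only; the figures and field names below are hypothetical, not VA’s actual sample data or record structure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Veteran:
    at_risk: bool             # risk factor identified during screening
    tested_fy: Optional[int]  # fiscal year a hepatitis C test was completed, if ever

def cumulative_rate(sample, measure_fy):
    """VA-style cumulative measure: a veteran counts as tested if a test
    was completed in the measurement year or any earlier year."""
    at_risk = [v for v in sample if v.at_risk]
    tested = [v for v in at_risk
              if v.tested_fy is not None and v.tested_fy <= measure_fy]
    return len(tested) / len(at_risk)

def current_year_rate(sample, measure_fy):
    """Subset measure: excludes veterans already tested in prior years,
    leaving only those who needed testing in the measurement year."""
    needing_test = [v for v in sample if v.at_risk
                    and (v.tested_fy is None or v.tested_fy >= measure_fy)]
    tested_now = [v for v in needing_test if v.tested_fy == measure_fy]
    return len(tested_now) / len(needing_test)

# Hypothetical sample of 10 at-risk veterans
sample = ([Veteran(True, 1998)] * 4     # tested in a prior fiscal year
          + [Veteran(True, 2002)] * 2   # tested in the measurement year
          + [Veteran(True, None)] * 4)  # never tested

print(f"cumulative: {cumulative_rate(sample, 2002):.0%}")    # 60%
print(f"current FY: {current_year_rate(sample, 2002):.0%}")  # 33%
```

In this hypothetical sample, the same at-risk population yields a 60 percent cumulative rate but only a 33 percent current-year rate, which is why the cumulative figure alone cannot show whether testing performance is improving.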
For example, when we excluded veterans from the sample who were tested for hepatitis C prior to fiscal year 2002, and included in the performance measurement sample only those veterans who were seen by VA in fiscal year 2002 and needed to be tested for hepatitis C, we found Network 5 tested 38 percent of these veterans as compared to Network 5’s cumulative performance measurement result of 60 percent. We identified three factors that impeded the process used by our case study network, VA’s Network 5 (Baltimore), for testing veterans identified as at risk for hepatitis C. The factors were tests not being ordered by the provider, ordered tests not being completed, and providers being unaware that needed tests had not been ordered or completed. More than two-thirds of the time, veterans identified as at risk were not tested because providers did not order the test, a crucial step in the process. The remainder of these untested veterans had tests ordered by providers, but the actual laboratory testing process was not completed. Moreover, veterans in need of hepatitis C testing had not been tested because providers did not always recognize during subsequent clinic visits that the hepatitis C testing process had not been completed. These factors are similar to those we identified and reported in our testimony in June 2001. Primary care providers and clinicians in Network 5’s three facilities offered two reasons that hepatitis C tests were not ordered for over two-thirds of the veterans identified as at risk but not tested for hepatitis C in the Network 5 fiscal year 2002 performance measurement sample. First, facilities lacked a method for clear communication between nurses who identified veterans’ risk factors and providers who ordered hepatitis C tests. For example, in two facilities, nurses identified veterans’ need for testing but providers were not alerted through a reminder in the computerized medical record to order a hepatitis C test. 
In one of these facilities, because nursing staff were at times delayed in entering a note in the computerized medical record after screening a veteran for hepatitis C risk factors, the provider was unaware of the need to order a test for a veteran identified as at risk. The three network facilities have changed their practices for ordering tests, and as of late 2002, nursing staff in each of the facilities are ordering hepatitis C tests for at-risk veterans. The second reason for tests not being ordered, which was offered by a clinician in another of the three Network 5 facilities, was that nursing staff did not properly complete the ordering procedure in the computer. Although nurses identified at-risk veterans using the hepatitis C screening clinical reminder in the medical record, they sometimes overlooked the chance the reminder gave them to place a test order. To correct this, nursing staff were retrained on the proper use of the reminder. For the remaining 30 percent of untested veterans in Network 5, tests were not completed for veterans who visited laboratories to have blood drawn after hepatitis C tests were ordered. One reason that laboratory staff did not obtain blood samples for tests was that more than two-thirds of the veterans’ test orders had expired by the time they visited the laboratory. VA medical facilities consider an ordered test to be expired or inactive if the veteran’s visit to the laboratory falls outside the number of days designated by the facility. For example, at two Network 5 facilities, laboratory staff considered a test order to be expired or inactive if the date of the order was more than 30 days before or after the veteran visited the laboratory. If the veteran’s hepatitis C test was ordered and the veteran visited the laboratory to have the test completed 31 days later, the test would not be completed because the order would have exceeded the 30-day period and would have expired. 
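The expiration rule amounts to a simple window check around the order date. A minimal sketch, assuming the 30-day window described for the two Network 5 facilities (function and variable names are illustrative, not VA’s software):

```python
from datetime import date

ACTIVE_WINDOW_DAYS = 30  # window used at two Network 5 facilities

def order_is_active(order_date: date, lab_visit: date,
                    window_days: int = ACTIVE_WINDOW_DAYS) -> bool:
    """A test order is active if the veteran's laboratory visit falls
    within the facility's window before or after the order date."""
    return abs((lab_visit - order_date).days) <= window_days

# Order dated June 1; the veteran visits the laboratory 31 days later.
print(order_is_active(date(2002, 6, 1), date(2002, 7, 2)))  # False: expired
# A visit 30 days after the order date still falls inside the window.
print(order_is_active(date(2002, 6, 1), date(2002, 7, 1)))  # True: active
```

Because the check is symmetric around the order date, it also covers orders given a future effective date: a laboratory visit within the window before that date would still count as active.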
Providers can also select future dates as effective dates. If the provider had designated a future date for the order and the veteran visited the laboratory within 30 days of that future date, the order would be considered active. Another reason for incomplete tests was that laboratory staff overlooked some active test orders when veterans visited the laboratory. VA facility officials told us that laboratory staff could miss test orders, given the many test orders some veterans have in their computerized medical records. The computer package used by laboratory staff to identify active test orders differs from the computer package used by providers to order tests. The laboratory package does not allow staff to easily identify all active test orders for a specific veteran by creating a summary of active test orders. According to a laboratory supervisor at one facility, the process for identifying active test orders is cumbersome because staff must scroll back and forth through a list of orders to find active laboratory test orders. Further complicating the identification of active orders for laboratory staff, veterans may have multiple laboratory test orders submitted on different dates from several providers. As a result, when the veteran visits the laboratory to have tests completed, instead of having a summary of active test orders, staff must scroll through a daily list of ordered tests (in two facilities, up to 60 days of orders) to identify the laboratory tests that need to be completed. Network and facility officials are aware of, but have not successfully addressed, this problem. VA plans to upgrade the computer package used by laboratory staff during fiscal year 2005. Hepatitis C tests that were not ordered or completed sometimes went undetected for long periods in Network 5, even though veterans often made multiple visits to primary care providers after their hepatitis C risk factors were identified. 
Our review of medical records showed that nearly two-thirds of the at-risk veterans in Network 5’s performance measurement sample who did not have ordered or completed hepatitis C tests had risk factors identified primarily in fiscal years 2002 and 2001. All veterans identified as at risk but who did not have hepatitis C test orders visited VA primary care providers at least once after having a risk factor identified during a previous primary care visit, including nearly 70 percent who visited more than three times. Further, almost all of the at-risk veterans who had hepatitis C tests ordered but not completed returned for follow-up visits for medical care. Even when the first follow-up visits were made to the same providers who originally identified these veterans as being at risk for hepatitis C, providers did not recognize that hepatitis C tests had not been ordered or completed. Providers did not follow up by checking for hepatitis C test results in the computerized medical records of these veterans. Most of these veterans subsequently visited the laboratory to have blood drawn for other tests and, therefore, could have had the hepatitis C test completed if the providers had recognized that test results were not available and reordered the hepatitis C tests. Steps intended to improve the testing rate of veterans identified as at risk for hepatitis C have been taken in three of VA’s 21 health care networks. VA network and facility officials in the three networks we reviewed—Network 5 (Baltimore), Network 2 (Albany), and Network 9 (Nashville)—identified similar factors that impede hepatitis C testing and most often focused on getting tests ordered immediately following risk factor identification. Officials in two networks modified VA’s required hepatitis C testing clinical reminder, which is satisfied when a hepatitis C test is ordered, to continue to alert the provider until a hepatitis C test result is in the medical record. 
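The reminder modification described above amounts to changing the condition that satisfies the alert. A minimal sketch of the two conditions (the record fields are illustrative, not VA’s actual reminder software):

```python
def required_reminder_due(record: dict) -> bool:
    # VA's required reminder: satisfied once a hepatitis C test is ordered.
    return record["at_risk"] and not record["test_ordered"]

def modified_reminder_due(record: dict) -> bool:
    # Modified reminder: keeps alerting until a test result is in the medical
    # record, acting as a backup when an ordered test is never completed.
    return record["at_risk"] and record["test_result"] is None

# An at-risk veteran whose ordered test was never completed:
record = {"at_risk": True, "test_ordered": True, "test_result": None}
print(required_reminder_due(record))  # False: order placed, alert stops
print(modified_reminder_due(record))  # True: no result yet, provider still alerted
```

The sketch shows why the modified condition closes the gap: for a veteran with an ordered but incomplete test, the required reminder falls silent while the modified reminder keeps alerting the provider.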
Officials at two facilities (one in Network 5 and the other in Network 9) created a safety net for veterans at risk for hepatitis C who remain untested by developing a method that looks back through computerized medical records to identify these veterans. The method has been adopted in all six facilities in Network 9; the other two facilities in Network 5 have not adopted it. VA network and facility managers in two networks we reviewed (Networks 2 and 9) instituted networkwide changes intended to improve the ordering of hepatitis C tests for veterans identified as at risk. Facility officials recognized that VA’s enhanced clinical reminder that facilities were required to install by the end of fiscal year 2002 only alerted providers to veterans without ordered hepatitis C tests and did not alert providers to veterans with ordered but incomplete tests. These two networks independently changed this reminder to improve compliance with the testing of veterans at risk for hepatitis C. In both networks, the clinical reminder was modified to continue to alert the provider, even after a hepatitis C test was ordered. Thus, if the laboratory has not completed the order, the reminder is intended to act as a backup system to alert the provider that a hepatitis C test still needs to be completed. Providers continue to receive alerts until a hepatitis C test result is placed in the medical record, ensuring that providers are aware that a hepatitis C test might need to be reordered. The new clinical reminder was implemented in Network 2 in January 2002, and Network 9 piloted the reminder at one facility and then implemented it in all six network facilities in November 2002. Officials at two facilities in our review searched all records in their facilities’ computerized medical record systems and found several thousand untested veterans identified as at risk for hepatitis C. 
The process, referred to as a “look back,” involves searching all medical records to identify veterans who have risk factors for hepatitis C but have not been tested either because the providers did not order the tests or ordered tests were not completed. The look back serves as a safety net for these veterans. The network or facility can perform the look back with any chosen frequency and over any period of time. The population searched in a look back includes all veteran users of the VA facility and is more inclusive than the population that is sampled monthly in VA’s performance measurement process. As a result of a look back, one facility manager in Network 5 identified 2,000 veterans who had hepatitis C risk factors identified since January 2001 but had not been tested as of August 2002. Facility staff began contacting the identified veterans in October 2002 to offer them the opportunity to be tested. Although officials in the other two Network 5 facilities have the technical capability to identify and contact all untested veterans determined to be at risk for hepatitis C, they have not done so. An official at one facility not currently conducting look back searches stated that the facility would need support from those with computer expertise to conduct a look back search. A facility manager in Network 9 identified, through a look back, more than 1,500 veterans who had identified risk factors for hepatitis C but were not tested from January 2001 to September 2002. The manager in this facility began identifying untested, at-risk veterans in late March 2003 and providers subsequently began contacting these veterans to arrange testing opportunities. Other Network 9 facility managers have also begun to identify untested, at-risk veterans. Given that two facilities in our review have identified over 3,000 at-risk veterans in need of testing through look back searches, it is likely that similar situations exist at other VA facilities. 
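The look back itself reduces to a filter applied over all of a facility’s records rather than the monthly performance measurement sample. A hypothetical sketch (VA’s actual record structure is not described in this report, so the fields below are illustrative):

```python
def look_back(all_records, start, end):
    """Return veterans whose hepatitis C risk factor was identified in the
    period but who have neither a test result nor a recorded refusal."""
    return [r for r in all_records
            if r["risk_identified"] is not None
            and start <= r["risk_identified"] <= end
            and r["hcv_result"] is None
            and not r["refused_testing"]]

# Hypothetical facility records (YYYY-MM strings compare chronologically)
records = [
    {"id": 1, "risk_identified": "2001-03", "hcv_result": "negative", "refused_testing": False},
    {"id": 2, "risk_identified": "2002-01", "hcv_result": None,       "refused_testing": False},
    {"id": 3, "risk_identified": None,      "hcv_result": None,       "refused_testing": False},
    {"id": 4, "risk_identified": "2001-07", "hcv_result": None,       "refused_testing": True},
]
untested = look_back(records, "2001-01", "2002-09")
print([r["id"] for r in untested])  # [2]
```

Only veteran 2 is flagged: veteran 1 already has a result, veteran 3 has no identified risk factor, and veteran 4 refused testing. Run over an entire facility population, the same filter yields the safety-net list of at-risk veterans still in need of testing.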
Although VA met its goal for fiscal year 2002, thousands of veterans at risk for hepatitis C remained untested. Problems persisted with obtaining and completing hepatitis C test orders. As a result, many veterans identified as at risk did not know whether they had hepatitis C. These undiagnosed veterans risk unknowingly transmitting the disease as well as potentially developing complications resulting from delayed treatment. Some networks and facilities have upgraded VA’s required hepatitis C clinical reminder to continue to alert providers until a hepatitis C test result is present in the medical record. Such a system appears to have merit, but neither the networks nor VA has evaluated its effectiveness. Network and facility managers would benefit from knowing, in addition to the cumulative results, current fiscal year performance results for hepatitis C testing to determine the effectiveness of actions taken to improve hepatitis C testing rates. Some facilities have compensated for weaknesses in hepatitis C test ordering and completion processes by conducting look backs through computerized medical record systems to identify all at-risk veterans in need of testing. If all facilities were to conduct look back searches, potentially thousands more untested, at-risk veterans would be identified. To improve VA’s testing of veterans identified as at risk of hepatitis C infection, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to (1) determine the effectiveness of actions taken by networks and facilities to improve hepatitis C testing rates for veterans and, where actions have been successful, consider applying these improvements systemwide and (2) provide local managers with information on current fiscal year performance results, using a subset of the performance measurement sample of veterans, so that they can determine the effectiveness of actions taken to improve hepatitis C testing processes. 
In commenting on a draft of this report, VA concurred with our recommendations. VA said its agreement with the report’s findings was somewhat qualified because it was based on fiscal year 2002 performance measurement results. VA stated that the use of fiscal year 2002 results does not accurately reflect the significant improvement in VA’s hepatitis C testing performance, up from 62 percent in fiscal year 2002 to 86 percent in fiscal year 2003, according to results that became available recently. VA, however, did not include its fiscal year 2003 hepatitis C testing performance results by individual network, and as a result, we do not know if the wide variation in network results, which we found in fiscal year 2002, still exists in fiscal year 2003. We incorporated updated performance information provided by VA where appropriate. VA did report that it has, as part of its fiscal year 2003 hepatitis C performance measurement system, provided local facility managers with a tool to assess real-time performance in addition to cumulative performance. Because this tool was not available at the time we conducted our audit work, we were unable to assess its effectiveness. VA’s written comments are reprinted in appendix II. We are sending copies of this report to the Secretary of Veterans Affairs and other interested parties. We also will make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7101. Another contact and key contributors are listed in appendix III. 
To follow up on the Department of Veterans Affairs’ (VA) implementation of performance measures for hepatitis C, we (1) reviewed VA’s fiscal year 2002 performance measurement results of testing veterans it identified as at risk for hepatitis C, (2) identified factors that impede VA’s efforts to test veterans for hepatitis C in one VA health care network, and (3) identified actions taken by VA networks and medical facilities intended to improve the testing rate of veterans identified as at risk for hepatitis C. We reviewed VA’s fiscal year 2002 hepatitis C testing performance results, the most recently available data at the time we conducted our work, for a sample of 8,501 veterans identified as at risk and compared VA’s national and network results for fiscal year 2002 against VA’s performance goal for hepatitis C testing. The sample of veterans identified as at risk for hepatitis C was selected from VA’s performance measurement process—also referred to as the External Peer Review Process—that is based on data abstracted from medical records by a contractor. In addition, we looked at one VA health care network’s testing rate for at-risk veterans visiting its clinics in fiscal year 2002. To test the reliability of VA’s hepatitis C performance measurement data, we reviewed 288 medical records in Network 5 (Baltimore) and compared the results against the contractor’s results for the same medical records and found that VA’s data were sufficiently reliable for our purposes. To augment our understanding of VA’s performance measurement process for hepatitis C testing, we reviewed VA documents and interviewed officials in VA’s Office of Quality and Performance and Public Health Strategic Health Care Group. To identify the factors that impede VA’s efforts to test veterans for hepatitis C, we conducted a case study of the three medical facilities located in VA’s Network 5: Martinsburg, West Virginia; Washington, D.C.; and the VA Maryland Health Care System. 
We chose Network 5 for our case study because its hepatitis C testing performance, at 60 percent, was comparable to VA’s national performance of 62 percent. As part of the case study of Network 5, we reviewed medical records for all 288 veterans identified as at risk for hepatitis C who were included in that network’s sample for VA’s fiscal year 2002 performance measurement process. Of the 288 veterans identified as at risk who needed hepatitis C testing, VA’s performance results found that 115 veterans in VA’s Network 5 were untested. We reviewed the medical records for these 115 veterans and found hepatitis C testing results or indications that the veterans refused testing in 21 cases. Eleven veterans had hepatitis C tests performed subsequent to VA’s fiscal year 2002 performance measurement data collection. Hepatitis C test results or test refusals for 10 veterans were overlooked during VA’s data collection. As such, we consider hepatitis C testing opportunities to have been missed for 94 veterans. On the basis of our medical record review, we determined if the provider ordered a hepatitis C test and, if the test was ordered, why the test was not completed. For example, if a hepatitis C test had been ordered but a test result was not available in the computerized medical record, we determined whether the veteran visited the laboratory after the test was ordered. If the veteran had visited the laboratory, we determined if the test order was active at the time of the visit and was overlooked by laboratory staff. Based on interviews with providers, we identified the reason why hepatitis C tests were not ordered. We also analyzed medical records to determine how many times veterans with identified risk factors and no hepatitis C test orders returned for primary care visits. 
To determine actions taken by networks and medical facilities intended to improve the testing rate of veterans identified as at risk for hepatitis C, we expanded our review beyond Network 5 to include Network 2 and Network 9. We reviewed network and facility documents and conducted interviews with network quality managers and medical facility staff—primary care providers, nurses, quality managers, laboratory chiefs and supervisors, and information management staff. Our review was conducted from April 2002 through November 2003 in accordance with generally accepted government auditing standards. In addition to the contact named above, Carl S. Barden, Irene J. Barnett, Martha A. Fisher, Daniel M. Montinez, and Paul R. Reynolds made key contributions to this report.
VA Health Care: Improvements Needed in Hepatitis C Disease Management Practices. GAO-03-136. Washington, D.C.: January 31, 2003.
Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 2003.
Veterans’ Health Care: Standards and Accountability Could Improve Hepatitis C Screening and Testing Performance. GAO-01-807T. Washington, D.C.: June 14, 2001.
Veterans’ Health Care: Observations on VA’s Assessment of Hepatitis C Budgeting and Funding. GAO-01-661T. Washington, D.C.: April 25, 2001.
Hepatitis C is a chronic disease caused by a blood-borne virus that can lead to potentially fatal liver-related conditions. In 2001, GAO reported that VA missed opportunities to test about 50 percent of veterans identified as at risk for hepatitis C. GAO was asked to (1) review VA's fiscal year 2002 performance measurement results in testing veterans at risk for hepatitis C, (2) identify factors that impede VA's efforts to test veterans for hepatitis C, and (3) identify actions taken by VA networks and medical facilities to improve the testing rate of veterans at risk for hepatitis C. GAO reviewed VA's fiscal year 2002 hepatitis C performance results and compared them against VA's national performance goals, interviewed headquarters and field officials in three networks, and conducted a case study in one network. VA's performance measurement result shows that it tested, in fiscal year 2002 or earlier, 5,232 (62 percent) of the 8,501 veterans identified as at risk for hepatitis C in VA's performance measurement sample, exceeding its fiscal year 2002 national goal of 55 percent. However, thousands of veterans, about one-third of those identified as at risk for hepatitis C infection in VA's performance measurement sample, were not tested. VA's hepatitis C testing result is a cumulative measure of performance over time and does not reflect current fiscal year performance alone. GAO found that Network 5 (Baltimore) tested 38 percent of veterans needing tests in fiscal year 2002, as compared to Network 5's cumulative performance result of 60 percent. In its case study of Network 5, which was one of the networks to exceed VA's fiscal year 2002 performance goal, GAO identified several factors that impeded the hepatitis C testing process. These factors were tests not being ordered by the provider, ordered tests not being completed, and providers being unaware that needed tests had not been ordered or completed. 
For more than two-thirds of the veterans identified as at risk but not tested for hepatitis C, the testing process failed because hepatitis C tests were not ordered, mostly due to poor communication between clinicians. For the remaining veterans, the testing process was not completed because orders had expired by the time veterans visited the laboratory or because laboratory staff overlooked active test orders while scrolling back and forth through daily lists, a cumbersome process, to identify them. Moreover, during subsequent primary care visits by these untested veterans, providers often did not recognize that hepatitis C tests had not been ordered or that results had not been obtained. Consequently, undiagnosed veterans risk unknowingly transmitting the disease as well as potential complications resulting from delayed treatment. The three networks GAO looked at--5 (Baltimore), 2 (Albany), and 9 (Nashville)--have taken steps intended to improve the testing rate of veterans identified as at risk for hepatitis C. In two networks, officials modified clinical reminders in the computerized medical record to continue alerting providers when results for ordered hepatitis C tests were unavailable. Officials at two facilities developed a "look back" method of searching computerized medical records to identify all at-risk veterans who had not yet been tested, identifying approximately 3,500 untested veterans. The look back serves as a safety net for veterans identified as at risk for hepatitis C who have not been tested. The modified clinical reminder and look back method of searching medical records appear promising, but neither the networks nor VA has evaluated their effectiveness.
US-VISIT is a governmentwide program intended to enhance the security of U.S. citizens and visitors, facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors. To achieve its goals, US-VISIT is to collect, maintain, and share information on certain foreign nationals who enter and exit the United States; detect fraudulent travel documents, verify traveler identity, and determine traveler admissibility through the use of biometrics; facilitate information sharing and coordination within the immigration and border management community; and identify foreign nationals who (1) have overstayed or violated the terms of their admission; (2) may be eligible to receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials. The scope of the program includes the pre-entry, entry, status, and exit of hundreds of millions of foreign national travelers who enter and leave the United States at over 300 air, sea, and land POEs. The US-VISIT program office is responsible for managing the acquisition, deployment, operation, and sustainment of US-VISIT systems in support of such DHS agencies as Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE). As of March 31, 2007, the program director reports to the Under Secretary for the National Protection and Programs Directorate. In 2003, DHS planned to deliver US-VISIT capability in 4 increments: Increment 1 (air and sea entry and exit), Increment 2 (land entry and exit), Increment 3 (land entry and exit), and Increment 4, which was to define, design, build, and implement a more strategic program capability. Since then the scope of the first three increments has changed. The current scope is Increment 1 (air and sea entry), Increment 2 (air, sea, and land entry), and Increment 3 (land entry). 
Increment 4 is still intended to define, design, build, and implement a more strategic program capability, which program officials stated will consist of a series of incremental releases or mission capability enhancements that will support business outcomes. In Increments 1 through 3, the program has built interfaces among existing (“legacy”) systems, enhanced the capabilities of these systems, and deployed these capabilities to air, sea, and land POEs. These first three increments have been largely pursued through existing system contracts and task orders. Increment 4 strategic system enhancements are being pursued through a systems integration contract awarded to Accenture and its partners in May 2004. Through fiscal year 2007, about $1.7 billion has been appropriated for US-VISIT. According to the Department of Homeland Security Appropriations Act, 2007, DHS may not obligate $200 million of the $362.494 million appropriated for US-VISIT in fiscal year 2007 until DHS provides the Senate and House Committees with a plan for expenditure that meets several criteria. The department has requested $462 million in fiscal year 2008 for the program. As of January 31, 2007, program officials stated that about $1.3 billion has been obligated for US-VISIT activities. A biometrically enabled US-VISIT entry capability is operating at most POEs. On January 5, 2004, the program office deployed and began operating most aspects of its planned biometric entry capability at 115 airports and 14 seaports for certain foreign nationals, including those from visa waiver countries. As of December 2006, the program office also deployed and began operating this entry capability in the secondary inspection areas of 154 of 170 land POEs. According to program officials, 14 of the remaining 16 POEs have no operational need to deploy US-VISIT because visitors subject to US-VISIT are, by regulation, not authorized to enter into the United States at these locations. 
The other two POEs do not have the necessary transmission lines to operate US-VISIT, and thus they process visitors manually. According to DHS, these entry capabilities have produced results. For example, as of June 15, 2007, it had more than 7,600 biometric hits in primary entry resulting in more than 1,500 people having adverse actions, such as denial of entry, taken against them. Further, about 14,000 leads were referred to ICE’s immigration enforcement unit, resulting in 315 arrests. Another potential consequence is the deterrent effect of having an operational entry capability. Although deterrence is difficult to demonstrate, officials have cited it as a byproduct of having a publicized capability at the border to screen entry on the basis of identity verification and matching against watch lists of known and suspected terrorists. Over the last few years, DHS has devoted considerable time and resources towards establishing an operational exit capability at air, sea, and land POEs. For example, between 2003 and 2006, DHS reports allocating about $250 million for exit-related efforts. Notwithstanding this considerable investment of time and resources, DHS still does not have an operational exit capability. Our prior reports have raised a number of concerns about DHS’s management of US-VISIT’s exit efforts. As we and others have reported, the absence of a biometric exit capability raises questions about what meaningful US-VISIT data are available to DHS components, such as ICE. Without this exit capability, DHS cannot ensure the integrity of the immigration system by identifying and removing those people who have overstayed their original period of admission—a stated goal of US-VISIT. Further, ICE’s efforts to ensure the integrity of the immigration system could be degraded if it continues to spend its limited resources on investigating potential visa violators who have already left the country. 
Between January 2004 and May 2007, the program office conducted various exit pilots at one air and one sea POE without fully deploying a biometric exit capability. Throughout this period, we have reported on the limitations in how these pilot activities were planned, defined, and justified. For example, we reported in September 2003, prior to the pilots being deployed, that DHS had not economically justified the initial US-VISIT increment (which was to include an exit capability at air and sea POEs) on the basis of benefits, costs, and risks. As a result, we recommended that DHS determine whether proposed incremental capabilities would produce value commensurate with program costs and risks. We later reported in May 2004 that DHS had not deployed a biometric exit capability to the 80 air and 14 sea POEs as part of Increment 1 deployment in December 2003, as it had originally intended. Instead, as we mention above, the pilot exit capability was deployed to only one air and one sea POE on January 5, 2004. In February 2005, we reported that the program office had not adequately planned for evaluating its exit pilot at air and sea POEs because the pilot’s evaluation scope and time line were compressed, and thus would not provide the program office with sufficient information to adequately assess the pilots and permit the selection of the best exit solution for deployment. Accordingly, we recommended that the program office reassess its plans for deploying an exit capability to ensure that the scope of the pilot provided an adequate evaluation of alternatives. A year later in February 2006, we reported that the program office had extended the pilot from 5 to 11 POEs (nine airports and two seaports) and the time frame by an additional 7 months. Notwithstanding the expanded scope and time frame, the exit pilots were not sufficiently evaluated. 
In particular, on average only about 24 percent of those travelers subject to US-VISIT actually complied with the exit processing steps. The evaluation report attributed this, in part, to the fact that compliance during the pilot was voluntary, and that to achieve the desired compliance rate, the exit solution would need an enforcement mechanism, such as not allowing persons to reenter the United States if they do not comply with the exit process. Despite this limitation, as of February 2006, program officials had not conducted any formal evaluation of enforcement mechanisms or their possible effect on compliance or cost, and according to the then Acting Program Director, no such evaluation would be done. Nonetheless, DHS continued to operate the exit pilots. In February 2006, we also reported that while DHS had analyzed the cost, benefits, and risks for its air and sea exit capability, the analyses did not demonstrate that the program was producing or would produce mission value commensurate with expected costs and benefits, and the costs upon which the analyses were based were not reliable. A year later, we reported that DHS had not adequately defined and justified its past investment in its air and sea exit pilots and its land exit demonstration projects, and still did not have either an operational exit capability or a viable exit solution to deploy. We further noted that exit-related program documentation did not adequately define what work was to be done or what these efforts would accomplish, did not describe measurable outcomes from the pilot or demonstration efforts, and did not indicate the related cost, schedule, and capability commitments that would be met. We recommended that planned expenditures be limited for exit pilots and demonstration projects until such investments were economically justified and until each investment had a well-defined evaluation plan. In its comments on our report, DHS agreed with our recommendation. 
In January 2004, DHS committed to delivering a biometric exit capability by December 2005; however, we reported that program officials concluded in January 2005 that a biometric land exit capability could not be implemented without having a major impact on land POE facilities. According to these officials, the only proven technology available to biometrically verify individuals upon exit at land POEs would necessitate mirroring the entry processes, which the program reported was “an infeasible alternative for numerous reasons, including but not limited to, the additional staffing demands, new infrastructure requirements, and potential trade and commerce impacts.” In light of these constraints, the program office tested radio frequency identification (RFID) technology as a means of recording visitors as they exit at land POEs. However, this technology was not biometrics-based. Moreover, testing and analysis at five land POEs at the northern and southern borders identified numerous performance and reliability problems, such as the failure of RFID readers to detect a majority of travelers’ tags during testing. According to program officials, no technology or device currently exists to biometrically verify persons exiting the country that would not have a major impact on land POE facilities. They added that technological advances over the next 5 to 10 years will make it possible to biometrically verify persons exiting the country without major changes to facility infrastructure and without requiring those exiting to stop and/or exit their vehicles. In November 2006, during the course of our work on, among other things, the justification for ongoing land exit demonstration projects, DHS terminated these projects. In our view, the decision was warranted because DHS had not adequately defined and justified its investment in its pilots and demonstration projects. 
As noted earlier, we recommended in February 2007 that planned expenditures be limited for exit pilots and demonstration projects until such investments are economically justified and until each investment has a well-defined evaluation plan. DHS agreed with our recommendation. According to relevant federal guidance, the decision to invest in a system or system component should be based on a clear definition of what capabilities, involving what stakeholders, will be delivered according to what schedule and at what cost. Moreover, such investment decisions should be based on reasonable assurance that a proposed program will produce mission value commensurate with expected costs and risks. As noted earlier, DHS funding plans have collectively allocated about $250 million to a number of exit efforts through 2006, but without having adequately defined or economically justified them. Now, in 2007, it risks repeating these same mistakes as it embarks on yet another attempt to implement a means by which to biometrically track certain foreign nationals exiting the United States, first at airports, and then at seaports, with land exit capabilities being deferred to an unspecified future time. Based on the department’s latest available documentation, it intends to spend $27.3 million ($7.3 million in fiscal year 2007 funding and $20 million in fiscal year 2006 carryover funding) on air and sea exit capabilities. However, it has not produced either the plans or the analyses that adequately define and justify how it intends to invest these funds. Rather, it has only generally described near-term deployment plans for biometric exit capabilities at air and sea POEs, and acknowledged that a near-term biometric solution for land POEs is not possible. 
More specifically, the US-VISIT fiscal year 2007 expenditure plan states that DHS will begin the process of planning and designing an air and sea exit solution during fiscal year 2007, focusing initially on air exit and then emulating these technology and operational experiences in completing the sea exit solution. According to this plan, air exit efforts will begin during the third quarter of fiscal year 2007, which ends in 2 days. However, US-VISIT program officials told us as recently as three weeks ago that this deadline will not be met. Moreover, no exit program plans are available that define what will be done, by what entities, and at what cost to define, acquire, deliver, deploy, and operate this capability, including plans describing expected system capabilities, defining measurable outcomes (benefits and results), identifying key stakeholder (e.g., airlines) roles/responsibilities and buy-in, and coordinating and aligning with related programs. Further, there is no analysis available comparing the life cycle costs of the air exit solution to its expected benefits and risks. The only additional information available to date is what the department characterized as a high-level schedule for air exit that we obtained on June 11, 2007. This schedule shows that business requirements and a concept of operations are to be completed by September 3, 2007; a cost-benefit analysis is to be completed by October 1, 2007; testing is to be completed by October 1, 2008; and the exit solution is to be fully deployed in 2 years (June 2009). However, the schedule does not include the underlying details supporting the timelines for such areas of activity as system design, system development, and system testing. According to program officials, more detailed schedules exist but were not provided to us because the schedules had not yet been approved by DHS. 
Further, while the expenditure plan states that DHS plans to integrate the air exit solution with the commercial airlines’ existing check-in processes and to integrate US-VISIT’s efforts with CBP’s pre-departure Advance Passenger Information System and the Transportation Security Administration’s (TSA’s) Secure Flight, the program office did not provide any documentation that describes what has been done with regard to these plans or what is planned relative to engaging with and obtaining buy-in from the airlines. Nevertheless, DHS plans to issue a proposed regulation requiring airlines to participate in this effort by December 17, 2007. With regard to land exit, the future is even more unclear. According to the fiscal year 2007 expenditure plan, the department has concluded that a biometric land exit capability is not practical in the short term because of the costly expansion of existing exit capacity, including physical infrastructure, land acquisition, and staffing. As a result, DHS states an intention to begin matching entry and exit records using biographic information in instances where no current collection exists today, such as in the case of individuals who do not submit their Form I-94 upon departure. According to DHS, it has also initiated discussions with its Canadian counterparts about the potential for them to collect biographical exit data at entry into Canada. Such a solution could include data sharing between the two countries and would require significant discussions on specific data elements and the means of collection and sharing, including technical, policy, and legal issues associated with this approach. However, DHS has yet to provide us with any documentation that specifies what data elements would be collected or what technical, policy, and legal issues would need to be addressed. Further, according to DHS, it has not yet determined a time frame or any cost estimates for the initiation of such a non-biometric land exit solution. 
In closing, we would like to emphasize the mission importance of a cost-effective, biometrically enabled exit capability, and that delivering such a capability requires effective planning and justification, and rigorous and disciplined system acquisition management. To date, these activities have not occurred for DHS’s exit efforts. If this does not change, there is no reason to expect that DHS’s newly launched efforts to deliver an air and sea exit solution will produce results different from its past efforts—namely, no operational exit solution despite many years and hundreds of millions of dollars of investment. More importantly, the continued absence of an exit capability will hinder DHS’s ability to effectively and efficiently perform its border security and immigration enforcement mission. Hence, it is important that DHS approach its latest attempt to deploy its exit capabilities in the kind of rigorous and disciplined fashion that we have previously recommended. Madam Chairwoman, this concludes our statement. We would be happy to answer any questions that you or members of the subcommittee may have at this time. If you should have any questions about this testimony, please contact Randolph C. Hite at (202) 512-3439 or hiter@gao.gov, or Richard M. Stana at (202) 512-8777 or stanar@gao.gov. Other major contributors include Deborah Davis, Kory Godfrey, Daniel Gordon, David Hinchman, Kaelin Kuhn, John Mortin, and Amos Tevelow. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) has spent and continues to invest hundreds of millions of dollars each year in its U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program to collect, maintain, and share information on selected foreign nationals who enter and exit the United States at over 300 air, sea, and land ports of entry (POEs). The program uses biometric identifiers (digital finger scans and photographs) to screen people against watch lists and to verify that a visitor is the person who was issued a visa or other travel document. GAO's testimony addresses the status of US-VISIT entry and exit capabilities and DHS's management of past and future exit efforts. In developing its testimony, GAO drew from eight prior reports on US-VISIT as well as ongoing work for the committee. After investing about $1.3 billion over 4 years, DHS has delivered essentially one-half of US-VISIT, meaning that biometrically enabled entry capabilities are operating at almost 300 air, sea, and land POEs but comparable exit capabilities are not. To the department's credit, operational entry capabilities have reportedly produced results, including more than 1,500 people having adverse actions, such as denial of entry, taken against them. However, DHS still does not have the other half of US-VISIT (an operational exit capability) despite the fact that its funding plans have allocated about one-quarter of a billion dollars since 2003 to exit-related efforts. During this time, GAO has continued to cite weaknesses in how DHS is managing US-VISIT in general, and the exit side of US-VISIT in particular, and has made numerous recommendations aimed at better ensuring that the program delivers clearly defined and adequately justified capabilities and benefits on time and within budget. The prospects for successfully delivering an operational exit solution are as uncertain today as they were 4 years ago. 
The department's latest available documentation indicates that little has changed in how DHS is approaching its definition and justification of future US-VISIT exit efforts. Specifically, DHS has indicated that it intends to spend $27.3 million ($7.3 million in fiscal year 2007 funding and $20 million in fiscal year 2006 carryover funding) on air and sea exit capabilities. However, it has not produced either plans or analyses that adequately define and justify how it intends to invest these funds. Rather, it has only described in general terms near-term deployment plans for biometric exit capabilities at air and sea POEs, and acknowledged that a near-term biometric solution for land POEs is not possible. Beyond this high-level schedule, no other exit program plans are available that define what will be done by what entities and at what cost. In the absence of more detailed plans and justification governing its exit intentions, it is unlikely that the department's latest efforts to deliver near-term air and sea exit capabilities will produce results different from the past. Therefore, the prospects for having operational exit capabilities continue to be unclear. Moreover, the longer the department goes without exit capabilities, the more its ability to effectively and efficiently perform its border security and immigration enforcement missions will suffer. Among other things, this means that DHS cannot ensure the integrity of the immigration system by identifying and removing those people who have overstayed their original period of admission, which is a stated goal of US-VISIT. Further, DHS immigration and customs enforcement entities will continue to spend limited resources on investigating potential visa violators who have already left the country.
In the past, we have suggested four broad principles or criteria for a budget process. A process should

- provide information about the long-term impact of decisions, both macro—linking fiscal policy to the long-term economic outlook—and micro—providing recognition of the long-term spending implications of government commitments;
- provide information and be structured to focus on important macro trade-offs—e.g., between investment and consumption;
- provide information necessary to make informed trade-offs between missions (or national needs) and between the different policy tools of government (such as tax provisions, grants, and credit programs); and
- be enforceable, provide for control and accountability, and be transparent, using clear, consistent definitions.

The lack of adherence to the original BEA spending constraints in recent years, the nearing expiration of BEA, and the projection of continued and large surpluses in the coming years suggest that now may be an opportune time to think about the direction and purpose of our nation’s fiscal policy. In a time of actual and projected surpluses, the goal of zero deficit no longer applies. Rather, discussion shifts toward how to allocate surpluses among debt reduction, spending increases, and tax cuts. Only then can limits on subcategories of spending be set. Will the entire Social Security surplus be “saved”? What about the Medicare Part A surplus? In our work on other countries that also have faced the challenge of setting fiscal policy in times of surplus, we found that as part of a broad fiscal policy framework some countries adopted fiscal targets such as debt-to-gross domestic product (GDP) ratios to serve as guides for decision-making. Complicating the discussion on formulating fiscal policy in a time of surplus is the fact that the long-term picture is not so good. 
Despite current projections that show surpluses continuing over the 10-year budget window, our long-term budget simulations show a resumption of significant deficits emerging after the anticipated demographic tidal wave of population aging hits. These demographic trends serve to emphasize the importance of the first principle cited above—the need to bring a long-term perspective to bear on budget debates. Keeping in mind these principles and concerns, a number of alternatives appear promising. There is a broad consensus among observers and analysts who focus on the budget both that BEA has constrained spending and that continuation of some restraint is necessary even with the advent of actual and projected surpluses. Discussions on the future of the budget process have primarily focused on revamping the current budget process rather than establishing a new one from scratch. Where discussion has moved beyond a general call for continued restraint to specific control devices, the ones most frequently discussed are (1) extending the discretionary spending caps, (2) extending the PAYGO mechanism, and (3) creating a trigger device or a set of rules specifically designed to deal with the uncertainty of budget projections. A new budget process framework could encompass any or all of these instruments. BEA distinguished between spending controlled by the appropriations process—“discretionary spending”—and that which flowed directly from provisions of authorizing legislation—“direct spending,” sometimes called “mandatory.” Caps were placed on discretionary spending—and the Congress’ compliance with the caps was relatively easy to measure because discretionary spending totals flow directly from legislative actions (i.e., appropriations laws). There is broad consensus that, although the caps have been adjusted, they have served to constrain appropriations. 
This consensus combined with the belief that some restraints should be continued has led many to propose that some form of cap structure be continued as a way of limiting discretionary appropriations. However, the actions in the last 2 years have also led many to note that caps can only work if they are realistic; while caps may be seen as tighter than some would like, they are not likely to bind if they are seen as totally unreasonable given current conditions. Further, some have proposed that any extension of BEA-type caps be limited to caps on budget authority. Outlays are controlled by and flow from budget authority—although at different rates depending on the nature of the programs. Some argue that the existence of both budget authority and outlay caps has encouraged provisions such as “delayed obligations” to be adopted not for programmatic reasons but as a way of juggling the two caps. The existence of two caps may also skew authority from rapid spendout to slower spendout programs, thus pushing more outlays to the future and creating problems in complying with outlay caps in later years. Extending only the budget authority cap would eliminate the incentive for such actions and focus decisions on that which the Congress is intended to control—budget authority, which itself controls outlays. This would be consistent with the original design of BEA. Eliminating the outlay cap would raise several issues—chief among them being how to address the control of transportation programs for which no budget authority cap currently exists, and the use of advance appropriations to skirt budget authority caps. However, agreements about these issues could be reached—this is not a case where implementation difficulties need derail an idea. For example, the fiscal year 2002 budget proposes a revision to the scorekeeping rule on advance appropriations so that generally they would be scored in the year of enactment. 
If the Budget Committees and CBO agree, this change could eliminate the practice of using advance appropriations to skirt the caps. The obvious advantage to focusing decisions on budget authority rather than outlays is that the Congress would not spend its time trying to control that which by design is the result of its budget authority decisions—the timing of outlays. There are other issues in the design of any new caps. For example, for how long should caps be established? What categories should be established within or in lieu of an overall cap? While the original BEA envisioned three categories (Defense, International Affairs, and Domestic), over time categories were combined and new categories were created. At one time or another, caps for Nondefense, Violent Crime Reduction, Highways, Mass Transit, and Conservation spending existed—many with different expiration dates. Should these caps be ceilings, or should they—as is the case for Highways and Conservation—provide for “guaranteed” levels of funding? The selection of categories—and the design of the applicable caps—is not trivial. Categories define the range of what is permissible. By design they limit trade-offs and so constrain both the Congress and the President. Because caps are phrased in specific dollar amounts, it is important to address the question of when and for what reasons the caps should be adjusted. This is critical for making the caps realistic. For example, without some provision for emergencies, no caps can be successful. At the same time, there appears to be some connection between how realistic the caps are and how flexible the definition of emergency is. As discussed in last year’s compliance report, the amount and range of spending considered “emergency” has grown in recent years. There have been a number of approaches suggested to balance the need to respond to emergencies and the desire to avoid making the “emergency” label an easy way to raise caps. 
In the budget resolution for fiscal year 2001 [H. Con. Res. 290], the Congress said it would limit emergencies to items meeting five criteria: (1) necessary, essential, or vital (not merely useful or beneficial), (2) sudden, quickly coming into being, and not building up over time, (3) an urgent, pressing, and compelling need requiring immediate action, (4) unforeseen, unpredictable, and unanticipated, and (5) not permanent, temporary in nature. The resolution further required any proposal for emergency spending that did not meet all the criteria to be accompanied by a statement of justification explaining why the requirement should be accorded emergency status. The fact that this provision was ignored during debates on fiscal year 2001 appropriations bills emphasizes that no procedural hurdle can succeed without the will of the Congress. Others have proposed providing for more emergency spending—either in the form of a reserve or in a greater appropriation for the Federal Emergency Management Agency (FEMA)—under any caps. If such an approach were to be taken, the amounts for either the reserve or the FEMA disaster relief account would need to be included when determining the level of the caps. Some have proposed using a 5- or 10-year rolling average of disaster/emergency spending as the appropriate reserve amount. Adjustments to the caps would be limited to spending over and above that reserve or appropriated level for extraordinary circumstances. Alternatively, with additional up-front appropriations or a reserve, emergency spending adjustments could be disallowed. Even with this kind of provision, only the commitment of the Congress and the President can make any limit on cap adjustments for emergencies work. States have used this reserve concept for emergencies, and their experiences indicate that criteria for using emergency reserve funds may be useful in controlling emergency spending. 
Agreements over the use of the reserve would also need to be achieved at the federal level. This discussion is not exhaustive. Other issues would come up in extending BEA. Previously, we have reported on two issues—the scoring of operating leases and the expansion of user fees as offsets to discretionary spending; because I think they need to be considered, let me touch on them briefly. We have previously reported that existing scoring rules favor leasing when compared to the cost of various other methods of acquiring assets. Currently, for asset purchases, budget authority for the entire acquisition cost must be recorded in the budget up front, in the year that the asset acquisition is approved. In contrast, the scorekeeping rules for operating leases often require that only the current year’s lease costs be recognized and recorded in the budget. This makes the operating lease appear less costly from an annual budgetary perspective, and uses up less budget authority under the cap. Alternative scorekeeping rules could recognize that many operating leases are used for long-term needs and should be treated on the same basis as purchases. This would entail scoring up front the present value of lease payments covering the same period used to analyze ownership options. The caps could be adjusted appropriately to accommodate this change. Many believe that one unfortunate side effect of the structure of the BEA has been an incentive to create revenues that can be categorized as “user fees” and so offset discretionary spending—rather than be counted on the PAYGO scorecard. The 1967 President’s Commission on Budget Concepts recommended that receipts from activities that were essentially governmental in nature, including regulation and general taxation, be reported as receipts, and that receipts from business-type activities “offset to the expenditures to which they relate.” However, these distinctions have been blurred in practice. 
Ambiguous classifications combined with budget rules that make certain designs most advantageous have led to a situation in which there is pressure to treat fees from the public as offsets to appropriations under BEA caps, regardless of whether the underlying federal activity is business or governmental in nature. Consideration should be given to whether it is possible to come up with and apply consistent standards—especially if the discretionary caps are to be redesigned. The administration has stated that it plans to monitor and review the classification of user fees and other types of collections. The PAYGO requirement prevented legislation that lowered revenue, created new mandatory programs, or otherwise increased direct spending from increasing the deficit unless offset by other legislative actions. As long as the unified budget was in deficit, the provisions of PAYGO—and its application—were clear. The shift to surplus raised questions about whether the prohibition on increasing the deficit also applied to reducing the surplus. Although the Congress and the executive branch have both concluded that PAYGO does apply in such a situation, any extension should eliminate potential ambiguity in the future. This year, the administration has proposed—albeit implicitly—special treatment for a tax cut. The budget states that the President’s tax plan and Medicare reforms are fully financed by the surplus and that any other spending or tax legislation would need to be offset by reductions in spending or increases in receipts. It is possible that in a time of budget surplus, the Congress might wish to modify PAYGO to permit increased direct spending or lower revenues as long as debt held by the public is planned to be reduced by some set percentage or dollar amount. Such a provision might prevent PAYGO from becoming as unrealistic as overly tight caps on discretionary spending. 
However, the design of such a provision would be important—how would a debt reduction requirement be specified? How would it be measured? What should be the relationship between the amount of debt reduction required and the amount of surplus reduction (i.e., tax cut or direct spending increase) permitted? What, if any, relationship should there be between this calculation and the discretionary caps? While PAYGO constrained the creation or legislative expansion of direct spending programs and tax cuts, it accepted the existing provisions of law as given. It was not designed to trigger—and it did not trigger—any examination of “the base.” Cost increases in existing mandatory programs are exempt from control under PAYGO and could be ignored. However, constraining changes that increase the cost of entitlements and mandatories is not enough. Our long-term budget simulations show that as more and more of the baby boom generation retires, spending for Social Security, Medicare, and Medicaid will demand correspondingly larger shares of federal revenues. The growth in these programs will increasingly restrict budgetary flexibility. Even if the Social Security surpluses are saved and used for debt reduction, unified deficits are projected to emerge in about two decades, and by 2030 Social Security, Medicare, and Medicaid would require more than three-fourths of federal revenues. Previously we suggested some sort of “lookback” procedure to prompt a reexamination of “the base.” Under such a process, the Congress could specify spending targets for PAYGO programs for several years. The President could be required to report in his budget whether these targets either had been exceeded in the prior year or were likely to be exceeded in the current or budget years. He could then be required to recommend whether any or all of this overage should be recouped—and if so, to propose a way to do so. The Congress could be required to act on the President’s proposal. 
While the current budget process contains a similar point of order against worsening the financial condition of the Social Security trust funds, it would be possible to link “tripwires” or triggers to measures related to overall budgetary flexibility or to specific program measures. For example, if the Congress were concerned about declining budgetary flexibility, it could design a tripwire tied to the share of the budget devoted to mandatory spending or to the share devoted to a major program. Other variations of this type of tripwire approach have been suggested. The 1999 Breaux-Frist proposal (S. 1895) for structural and substantive changes to Medicare financing contained a new concept for measuring “programmatic insolvency” and required congressional approval of additional financing if that point was reached. Other specified actions could be coupled with reaching a tripwire, such as requiring the Congress or the President to propose alternatives for addressing the cost growth or, by using the congressional budget process, requiring the Congress to deal with unanticipated cost growth beyond a specified tripwire by establishing a point of order against a budget resolution with a spending path exceeding the specified amount. One example of a threshold might be the percentage of GDP devoted to Medicare. The President would be brought into the process as it progressed because changes to deal with the cost growth would require enactment of a law. In previous reports we have argued that the nation’s economic future depends in large part upon today’s budget and investment decisions. In fact, in recent years there has been increased recognition of the long-term costs of Social Security and Medicare. While these are the largest and most important long-term commitments—and the ones that drive the long-term outlook—they are not the only ones in the budget. Even those programs too small to drive the long-term outlook affect future budgetary flexibility. 
For the Congress, the President, and the public to make informed decisions about these other programs, it is important to understand their long-term cost implications. While the budget was not designed to and does not provide complete information on long-term cost implications stemming from some of the government’s commitments when they are made, progress can be made on this front. The enactment of the Federal Credit Reform Act in 1990 represented a step toward improving both the recognition of long-term costs and the ability to compare different policy tools. With this law, the Congress and the executive branch changed budgeting for loan and loan guarantee programs. Prior to the Credit Reform Act, loan guarantees looked “free” in the budget. Direct loans looked like grant programs because the budget ignored loan repayments. The shift to accrual budgeting for subsidy costs permitted comparison of the costs of credit programs both to each other and to spending programs in the budget. Information should be more easily available to the Congress and the President about the long-term cost implications both of existing programs and of new proposals. In 1997 we reported that the current cash-based budget generally provides incomplete information on the costs of federal insurance programs. The ultimate costs to the federal government may not be apparent up front because of time lags between the extension of the insurance, the receipt of premiums, and the payment of claims. While there are significant estimation and implementation challenges, accrual-based budgeting has the potential to improve budgetary information and incentives for these programs by providing more accurate and timely recognition of the government’s costs and improving the information and incentives for managing insurance costs. This concept was proposed in the Comprehensive Budget Process and Reform Act of 1999 (H.R. 
853), which would have shifted budgetary treatment of federal insurance programs from a cash basis to an accrual basis. There are other commitments for which the cash- and obligation-based budget does not adequately represent the extent of the federal government’s commitment. These include employee pension programs, retiree health programs, and environmental cleanup costs. While there are various analytical and implementation challenges to including these costs in budget totals, more could be done to provide information on the long-term cost implications of these programs to the Congress, the President, and the interested public. At the request of this Committee, we are continuing to address this issue. As the budgeting horizon expands, so does the certainty of error. Few forecasters would suggest that 10-year projections are anything but that—projections of what the world would look like if it continued on a line from today. And long-term simulations are useful to provide insight as to the direction and order of magnitude of certain trends—not as forecasts. Nevertheless, budgeting requires forecasts and projections. Baseline projections are necessary for measuring and comparing proposed changes. Former Congressional Budget Office (CBO) Director Rudy Penner suggested that 5-year and 10-year projections should be used for different purposes: 5-year projections for indicating the overall fiscal health of the nation, and 10-year projections for scorekeeping and preventing gaming of the timing of costs. No 10-year projection is likely to be entirely correct; the question confronting fiscal policymakers is how to deal with the risk that a projection is materially wrong. This year some commentators and Members of the Congress have suggested dealing with this risk by using triggers. Triggers were part of both Gramm-Rudman-Hollings (GRH) and BEA. 
The GRH triggers were tied to deficit results and generally regarded as a failure—they were evaded or, when deficits continued to exceed the targets, the targets were changed. BEA triggers have been tied to congressional action rather than to deficit results; sequesters have rarely been triggered—and those were very small. This year the discussion of triggers has been tied specifically to the tax debate and to whether the size of the tax cut in future years should be linked to budget results in those years. There could be several variations on this trigger: actual surplus results, actual revenue results (with the intent of avoiding a situation in which spending increases could derail a tax cut), and actual debt results. There is little consensus on the effectiveness of any triggers. Although the debate about triggers has been tied to the tax debate in 2001, there is no inherent reason to limit the discussion to taxes. Some might wish to consider triggers that would cause decisionmakers to make proposals to address fiscal results that exceed some specific target, such as debt or spending as a share of GDP. Former CBO Director Robert Reischauer suggested another way of dealing with the fact that forecasts and projections become less certain the further they go into the future. Under his proposal, a declining percentage of any projected surplus would be available—either for tax cuts or for spending increases. Specifically, 80 percent of the surplus would be available to legislators in years 1 and 2, 70 percent in years 3 and 4, and 60 percent in years 5 and 6, declining until reaching the 40-percent level in years 9 and 10. The consequence of not adhering to these limits would be an across-the-board sequester. When a new Congress convenes, it would be given a new budget allowance to spend based on a new set of surplus projections. 
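The arithmetic of the Reischauer sliding scale described above can be made concrete with a short sketch. Two caveats: the 50-percent share for years 7 and 8 is implied by the declining pattern (80, 70, 60, ..., 40) rather than stated explicitly, and the surplus figures used below are hypothetical, not actual projections.

```python
# Illustrative sketch of the declining-percentage proposal: a shrinking
# share of each year's projected surplus would be available for tax cuts
# or spending increases. The 50-percent share for years 7-8 is inferred
# from the pattern; the surplus projection is invented for illustration.

def available_share(year: int) -> float:
    """Share of the projected surplus available in a given year (1-10)."""
    if not 1 <= year <= 10:
        raise ValueError("proposal covers a 10-year window")
    # Years 1-2 -> 0.80, years 3-4 -> 0.70, ..., years 9-10 -> 0.40
    return 0.80 - 0.10 * ((year - 1) // 2)

def available_amounts(projected_surpluses: list[float]) -> list[float]:
    """Dollar amounts available to legislators for each projected year."""
    return [s * available_share(y)
            for y, s in enumerate(projected_surpluses, start=1)]

# Hypothetical 10-year surplus projection (billions of dollars)
surpluses = [100, 120, 140, 160, 180, 200, 220, 240, 260, 280]
print(available_amounts(surpluses))
```

The design intent is visible in the numbers: even if later-year surpluses are projected to be larger, a smaller fraction of them may be committed, reflecting the greater uncertainty of the out-year projections.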
Others have suggested that mechanisms such as a joint budget resolution and/or an automatic continuing resolution could avert the year-end disruption caused by an inability to reach agreement on funding the government. Biennial budgeting is also sometimes suggested as a better way to budget and to provide agencies more certainty in funding over 2 years. Let me turn now to these ideas. Since agreement on overall budget targets can set the context for a productive budget debate, some have suggested that requiring the President’s signature on budget resolutions would facilitate the debate within such a framework. Proposals to replace the Concurrent Resolution with a Joint Resolution should be considered in light of what the budget resolution represents. Prior to the 1974 act only the President had a budget—that is, a comprehensive statement of the level of revenues and spending and the allocation of that spending across “national needs” or federal mission areas. Requiring the President to sign the budget resolution means it would no longer be solely a statement of congressional priorities. Would such a change reduce the Congress’ ability to develop its own budget and so represent a shift of power from the Congress to the President? Whose hand would it strengthen? If it is really to reduce later disagreement, would it merely take much longer to get a budget resolution than it does today? It could be argued that under BEA the President and the Congress have—at times—reached politically binding agreements without a joint budget resolution. The periodic experience of government “shutdowns”—or partial shutdowns—when appropriations bills have not been enacted has led to proposals for an automatic continuing resolution. The automatic continuing resolution, however, is an idea for which the details are critically important. 
Depending on the detailed structure of such a continuing resolution, the incentive for policymakers—some in the Congress and the President—to negotiate seriously and reach agreement may be lessened. What about someone for whom the “default position” specified in the automatic continuing resolution is preferable to the apparent likely outcome? If the goal of the automatic continuing resolution is to provide a little more time for resolving issues, it could be designed to permit the incurrence of obligations to avoid a funding gap, but not the outlay of funds to liquidate the new obligations. This would allow agencies to continue operations for a period while the Congress completes appropriations actions. Finally, you asked me to discuss proposals for biennial budgeting. Some have suggested that changing the appropriations cycle from annual to biennial could (1) provide more focused time for congressional oversight of programs, (2) shift the allocation of agency officials’ time from the preparation of budgets to improved financial management and analysis of program effectiveness, and (3) enhance agencies’ abilities to manage their operations by providing more certainty in funding over 2 years. Given the regularity with which proposals for biennial budgeting are made, I believe that at least some will consider the upcoming necessity to decide whether to extend BEA as an opportunity to again propose biennial budgeting. Whether a biennial cycle offers the benefits sought will depend heavily on the ability of the Congress and the President to reach agreement on how to respond to uncertainties inherent in a longer forecasting period, for there will always be uncertainties. How often will the Congress and the President feel the need to reopen the budget and/or change funding levels? Budgeting always involves forecasting, which in itself is uncertain, and the longer the period of the forecast, the greater the uncertainty. 
Our work has shown that increased difficulty in forecasting was one of the primary reasons states gave for shifting from biennial to annual cycles. The budget is highly sensitive to the economy. Economic changes during a biennium would most likely prompt the Congress to revisit its decisions and reopen budget agreements. Among the issues that would need to be worked out if the Congress moves to a biennial budget cycle are how to update the CBO forecast and baseline against which legislative action is scored and how to deal with unexpected events. The baseline is important because CBO scores legislation based on the economic assumptions in effect at the time of the budget resolution. Even under an annual system there are years when this practice presents problems: in 1990 the economic slowdown was evident during the year, but consistent practice meant that bills reported in compliance with reconciliation instructions were scored based on the assumptions in the budget resolution rather than updated assumptions. If budget resolutions were biennial, this problem of outdated assumptions would be greater—some sort of update in the “off-year” likely would be necessary. In any consideration of a biennial budget, it is important to recognize that even with annual budgets, the Congress already has provided agencies with multiyear funding to permit improved planning and management. As you know, it is not necessary to change the frequency of decisions in order to change the length of time funds are available. Nearly two-thirds of the budget is for mandatory programs and entitlements on which decisions are not made annually. Even the remaining portion that is on an annual appropriations cycle is not composed entirely of 1-year appropriations that expire on September 30 of each year. The Congress routinely provides multiyear or no-year appropriations when it seems to make sense to do so. 
Thus, to the extent that biennial budgeting is proposed as a way to ease a budget execution problem, the Congress has shown itself willing and able to meet that need under the current annual cycle. If BEA is extended in conjunction with biennial budgeting, a whole host of technical issues needs to be considered. Would biennial budgeting change the timing of the BEA-required sequestration report? How would sequestrations be applied to the 2 years in the biennium and when would they occur? For example, if annual caps are continued and are exceeded in the second year of the biennium, when would the Presidential Order causing the sequestration be issued? Would the sequestration affect both years of the biennium? Would forecasts and baselines be updated during the biennium? These are just a few of the many questions that would need to be resolved. Regardless of the potential benefits, the decision on biennial budgeting will depend on how the Congress chooses to exercise its constitutional authority over appropriations and its oversight functions. We have long advocated regular and rigorous congressional oversight of federal programs. Annual enacted appropriations have long been a basic means of exerting and enforcing congressional policy. Oversight has often been conducted in the context of agency requests for funds. A 2-year appropriation cycle would change—and could lessen—congressional influence over program and spending matters since the process would afford fewer scheduled opportunities to affect agency programs and budgets. Biennial budgeting would bring neither the end of congressional control nor the guarantee of improved oversight. It would require a change in the nature of that control. If the Congress decides to proceed with a change to a biennial budget cycle—including a biennial appropriations cycle—careful thought will need to be given to implementation issues. 
To affect decision-making, the fiscal goals sought through a budget process must be accepted as legitimate. For many years the goal of “zero deficit”—or the norm of budget balance—was accepted as the right goal for the budget process. In the absence of the zero deficit goal, policymakers need an overall framework upon which a process and any targets can be based. Goals may be framed in terms of debt reduction or surpluses to be saved. In any case, compliance with budget process rules, in both form and spirit, is more likely if end goals, interim targets, and enforcement boundaries are both accepted and realistic. Enforcement is more successful when it is tied to actions controlled by the Congress and the President. Both the BEA spending caps and the PAYGO enforcement rules were designed to hold the Congress and the President accountable for the costs of the laws enacted each session—not for costs that could be attributed to economic changes or other factors. Today, the Congress and the President face a different budgetary situation than in the past few decades. The current budget challenge is not to achieve a balanced unified budget. Rather, budgeting today is done in the context of projections for continued and growing surpluses followed over the longer term by demography-driven deficits. What process will enable policymakers to deal with the near term without ignoring the long term? At the same time, the challenges for any budget process are the same: What process will enable policymakers to make informed decisions about both fiscal policy and the allocation of resources within the budget? Extending the current BEA without setting realistic caps and addressing existing mandatory programs is unlikely to be successful for the long term. In aiming for a balanced budget, the original BEA constrained only new legislative actions. It left untouched those programs—direct spending and tax legislation—already in existence. 
Going forward with new challenges, we believe that a new process that prompts the Congress to exercise more foresight in dealing with long-term issues is needed. The budget process appropriate for the early 21st century will have to exist as part of a broader framework for thinking about near- and long-term fiscal goals.
This testimony discusses the budget process established by the Budget Enforcement Act, which will expire in fiscal year 2002. Because the goal of zero deficits has been achieved, the focus of the budget process has shifted to the allocation of surpluses among debt reduction, spending increases, and tax cuts. The budget process should be designed to avoid what has been described as the year-end "train wreck." A year-end "train wreck" results from a failure to reach agreement--or at least a compromise acceptable to all parties--earlier in the year. Although it is possible that early agreement on some broad parameters could facilitate a smoother process, it is not clear that such an agreement will always prevent gridlock--it may just come earlier. Two ideas that have been proposed to avert the year-end disruption caused by an inability to reach agreement on funding the government include joint budget resolutions and biennial budgeting. In discussing alternatives for improving the budget process, there is a broad consensus among observers and budget analysts that the spending constraints established by the act are necessary even with the advent of actual and projected surpluses. Such constraints include (1) extending the discretionary spending caps, (2) extending the pay-as-you-go mechanism, and (3) creating a trigger device or set of rules specifically designed to deal with the uncertainty of budget projections.
It is estimated that DOD employs more than 400 different tester types. This equipment is used to diagnose problems in aircraft avionics and weapon system components so that the component can be repaired and replaced on the aircraft or put into the supply system for future use. For example, testers may be used to diagnose problems with aircraft radars, guidance and control systems, or weapon systems. According to DOD, the department spent over $50 billion in its acquisition and support of ATE from 1980 through 1992, and the procurement was characterized by the proliferation of testers designed to support a specific weapon system or component. These testers are quickly becoming obsolete and more difficult and costly to maintain because they may no longer be in production and parts may not be readily available. Over the years, various studies have criticized the continued proliferation of unique ATE and highlighted the need for the development and acquisition of testers that can be used to test more than one system or component. In September 1993, the House Appropriations Committee recommended that the Secretary of Defense develop a DOD-wide policy requiring ATE commonality among the services, along with a formal implementation mechanism with sufficient authority, staffing, and funding to ensure compliance. In 1994, DOD established a policy stating that managers of DOD programs should select families of testers or commercial off-the-shelf components to meet all ATE acquisition needs and that the introduction of unique testers should be minimized. DOD designated the Navy at that time as its Executive Agent to oversee policy implementation in all services, and identified a goal of reducing life-cycle costs and providing greater ATE commonality and interoperability. Additional DOD guidance published in 1996 and 1997 required that all ATE acquisitions be part of the approved families of testers or commercial off-the-shelf. 
DOD faces major challenges with aging and increasingly obsolete ATE. These problems include the high costs of maintaining and replacing ATE and the declining availability of spare parts for the aging testers. In addition, several DOD organizations, including the Navy Inspector General, have suggested that aging and obsolete ATE may adversely affect aviation readiness. Departmentwide estimates of funds needed for ATE modernization and acquisition are not readily available. However, according to Air Force and Navy ATE managers, most of the services’ ATE is obsolete and will need to be upgraded or replaced over the next several years. Our study confirmed that replacement and modernization costs would be substantial. The Navy, for example, spent about $1.5 billion from fiscal years 1990 through 2002 for the acquisition of its primary family of testers and plans to spend an additional $430 million through fiscal year 2007. Additionally, the Navy estimates that it plans to spend $584 million through fiscal year 2007 to adapt existing test program sets necessary to perform specific tests of the various aircraft components supported by this family of testers. The Navy also anticipates spending an additional $584 million to develop program test sets for new weapon system requirements. Information on the Air Force’s spending for ATE modernization is somewhat sketchy, as limited data are available centrally for individual weapon systems. According to a recent study done for the Air Force, the service has not developed a plan that allows modernization funding requirements to be determined. However, estimates are available for selected systems. The F-15 fighter program office, for example, is spending approximately $325 million on just one tester that will be fielded in 2004. It also plans to upgrade its electronic warfare tester, which is one of seven primary testers for the aircraft, at a cost of over $40 million. 
A 2002 study of B-52 bomber ATE identified obsolescence issues associated with six of the aircraft’s seven major testers that will require more than $140 million in the near future. Similarly, the upgrade of a unique B-1 bomber tester is expected to exceed $15 million, even though the Air Force is considering replacing this tester and has already begun planning the acquisition. The latest estimate for the new tester is $190 million. Current ATE estimates for the F/A-22, which is still under development, are not available. However, estimates made early in the development phase exceeded $1.5 billion. ATE is becoming increasingly out-of-date and more difficult to support. And, according to service officials, using this outdated equipment to perform required tests in a timely manner is becoming increasingly challenging. Although the services could not quantify the extent to which tester problems affect readiness, service officials noted that without adequate test equipment to diagnose problems, components cannot be repaired in a timely manner and the mission capability of military aircraft can be adversely affected. In August 2000, the Navy Inspector General identified shortfalls in ATE as having a negative impact on naval aviation and, in particular, on the availability of repaired components. During the same time frame, a Navy operational advisory group, recognizing the importance of ATE in maintaining readiness, ranked support equipment, including ATE, as one of its top 20 readiness issues. We have issued several reports in the recent past addressing the shortage of spare parts—a potential result of ATE problems. In addition, according to DOD readiness reports, only 28 percent of Air Force, Navy, and Marine Corps key aircraft models met their readiness goals in fiscal year 2002. Although difficulties in meeting these goals are caused by a complex combination of interrelated logistical and operational factors, the shortage of spare parts was a major cause. 
ATE plays a significant role in the supply of available spares, since this equipment affects both how many parts are taken out of service for repair and how quickly they are repaired and returned. We reported that maintenance and repair facilities routinely work around spare parts shortages by removing a working part from one aircraft to replace a nonworking part in another aircraft, a practice called “cannibalization.” And, although the services do not record increases in cannibalizations that are caused by ATE problems, the services use cannibalization as a routine maintenance practice when testers are not available or not working properly. In July 2001, we reported that as a result of ATE not working properly, unfilled requisitions were adversely affecting the mission capability of F-14 aircraft. In another case, more than 1,200 Air Force B-1 bomber components were backlogged and could not be repaired for the same reason. Although we were unable to measure specific reductions in the readiness of F-14 and B-1 aircraft as a result of this problem, mission capable rates for the B-1 in fiscal years 1998-2002 averaged approximately 55 percent, compared with the goal of 67 percent, while mission capable rates for the F-14D, during the same period, averaged 67 percent, compared with a goal of 71 percent. Additionally, the Air Force’s 2002 B-52 study concluded that six of the seven major testers used to test B-52 components need to be modified or replaced or the availability of the aircraft will be adversely affected as early as 2006. Air Force officials believe that similar problems will continue unless the service undertakes a major ATE modernization or replacement program. Since the early 1990s, DOD policies have addressed the need for commonality in ATE acquisition and modernization. Although the services have been making some progress, efforts to comply with these policies have been slow. 
For example, although the Navy has developed a single family of testers to work on many of its aircraft components, after 11 years, the replacement of its obsolete testers aboard aircraft carriers and shore maintenance facilities has not been completed. In addition, strategic planning for the modernization of automatic test equipment at Navy depots has only recently been initiated. Historically, the Air Force has not had a service-level ATE standardization policy and has essentially pursued unique ATE solutions for each weapon system. Since individual aircraft program offices have been doing their own planning for modernization, the Air Force has given little consideration to having common ATE or testers that are interoperable with those of other services. Planning for the Air Force’s latest aircraft acquisition, the F/A-22, calls for the development of automatic test equipment that will be unique to that aircraft. In August 2002, the Air Force initiated a planning effort to determine its long-term servicewide ATE modernization needs. According to Navy reports, obsolete ATE results in higher backlogs and increased flying hour costs, and adversely affects aircraft readiness. The Navy recognized years ago, and prior to the establishment of DOD’s 1994 ATE standardization policy, that its ATE was becoming obsolete. In the 1980s the Navy embarked upon an ATE standardization program to replace 25 of its testers with one standard ATE family, the Consolidated Automated Support System (CASS), to minimize unique types of testers. The Navy designed CASS to be used at maintenance activities both ashore and afloat. In 1991, the Navy began to produce CASS for the general purpose testing of equipment such as radios, radars, and electro-optics. (See fig. 2.) CASS’s replacement of 25 types of obsolete testers, in support of 2,458 weapon system components, was scheduled for completion by fiscal year 2000. 
However, according to Navy officials, because of budget cuts that caused delays in developing the test program sets, only 4 of the 25 have been completely replaced by CASS, and 8 test sets have been partially replaced. Navy officials told us that the completion schedule has slipped to fiscal year 2008 for aircraft carriers and shore maintenance facilities and could be much longer for aviation depots. The Navy reports that the replacement of these testers with CASS stations, when complete, will reduce the number of test-related enlisted occupational specialties from 32 to 4, thus reducing training requirements. In addition, CASS will reduce the requirement for test equipment operators aboard each aircraft carrier from 105 to 54, and at the same time reduce space requirements for testers from 2,700 to 1,900 square feet. Spare parts needed to repair testers will be reduced from 30,000 to 3,800. According to Navy officials, however, the revised completion schedule will not allow for the timely replacement of aging ATE, and these delays will adversely affect aircraft readiness. In addition to schedule slippage, the original CASS equipment was fielded about 10 years ago, uses 15-year-old technology and, according to Navy ATE program managers, is in need of an upgrade. Accordingly, by 2006, the first production units will have reached the point where wear and obsolete components will drive supporting costs to unacceptable levels and create a need for replacement and modernization. The Navy has begun modernization planning for CASS, including upgrades through fiscal year 2014. Integrating CASS into Navy depots may further delay ATE commonality within the service. For example, a 2001 Navy report, addressing total ATE ownership costs, noted that the depots have not maximized the use of CASS because of the limited availability of capital investment funds. In addition, at one depot we found some reluctance to use CASS. 
This depot had four CASS stations that had never been used—two were delivered in 1999 and installed in December 2000 and February 2001, while two others delivered in 2000 were still in crates. Depot officials said that they had elected not to put the equipment on-line, as they wanted to avoid paying for overhead and maintenance, especially without the workload to justify their use. They also noted that the development of the test program sets needed to use the CASS has been slow, thereby slowing the fielding of the equipment. The Navy has only recently begun a servicewide planning effort to modernize its depot-level testers and determine how best to integrate CASS into its depot maintenance strategy. Unlike the Navy, the Air Force has not made commonality a priority but has pursued unique ATE solutions for each weapon system. In addition, it has only recently initiated efforts to collect information on ATE in its inventory, including the equipment’s condition and its need for modernization or replacement. Because the Air Force has not made concerted efforts to use one system to service multiple aircraft platforms, it has not taken advantage of efficiencies and potential savings such as those expected by the Navy as a result of CASS. Although the Air Force is developing plans to modernize its ATE, and although its policy is to consider developing common testers, it does not yet have an overall plan to guide its modernization efforts and has made limited progress in this area. Furthermore, it does not have a process in place to ensure that commonality is given adequate consideration in its ATE acquisition and modernization. The Air Force has been primarily upgrading—rather than replacing—aging ATE, leaving ATE management to individual program managers. In most cases, it relies on contractors to provide support for ATE, leaving it vulnerable to contractors who may decide to stop supporting testers when maintaining them is no longer profitable. 
In early 2001, the Air Force organized the Warner Robins Air Logistics Center Automatic Test System Division to work with program offices on ATE issues. The Division has recently initiated efforts to establish a database of all contractors that are capable of supporting existing ATE to help identify emerging supportability issues. Although the office is responsible for fostering the adoption and use of common families of testers, it has no final decision-making authority regarding ATE modernizations and no control over funding decisions on these matters. Division officials told us that they work with individual project offices to encourage them to use common ATE, but individual project offices make the final decisions. In our opinion, leaving these ATE decisions to the individual Air Force project offices has led to some questionable and unnecessary expenditures. For example:
- The Air Force will spend approximately $325 million to replace a tester for the F-15 with one that has been under development for almost 10 years and is already obsolete. The new tester, called the Electronic System Test Set, is not expected to be fielded until 2004. However, this electronic tester already needs an upgrade that will cost more than $24 million. Because the new tester will not be able to perform all the required tests, the Air Force will have to keep the old tester too.
- The Air Force is spending over $15 million for an interim modernization of its intermediate automatic test equipment for its B-1 aircraft while, at the same time, a new tester is being developed. If the Air Force had taken the necessary steps to replace this obsolete tester in a timely manner, these duplicative costs could likely have been avoided, and overall ATE modernization costs reduced. According to an Air Force official, the program office should have begun the acquisition of a replacement tester several years ago, but funding was not available. 
The service is now considering acquiring a replacement tester estimated to cost $190 million. The Air Force’s Warner Robins Air Logistics Center Automatic Test System Division is developing a strategic plan that is expected to serve as a management plan for meeting long-term ATE needs. The Division plans to develop a baseline of its current tester capabilities, address supportability and sustainability issues, and determine whether tester failures adversely affect the availability of aircraft weapon systems. In addition, it will evaluate replacement and modernization alternatives, taking into account life-cycle costs and the potential for developing common testers. The plan’s implementation is expected to take years to complete. While most of our work focused on ATE for the current aircraft inventory, we also wanted to see how the services were approaching development of testers for two new aircraft, the Joint Strike Fighter and the F/A-22. We found that very different approaches are being taken in the development of ATE for these two aircraft. The JSF, for example, will have a single tester, made up almost entirely of commercial components, which will test all components on the aircraft. The F/A-22 project office has no assurance that commonality is being considered in its tester development or that DOD’s policy to minimize unique ATE development is being followed. The JSF originated in the early 1990s through the restructuring and integration of several tactical aircraft and technology initiatives already under way. The goal was to use the latest technology in a common family of aircraft to meet the future strike requirements of the services and U.S. allies. The JSF support strategy is built upon a single tester to be used by the Air Force, Navy, and Marine Corps, as well as by foreign partners, to test all avionics and weapon systems on the aircraft. 
The JSF tester, referred to as the LM-STAR, is made up almost entirely of commercially available components, contributing to readily available spares and less complicated upgrades. It will be used during development and after the aircraft is fielded. Vendors participating in the development of avionics and weapon system components for the aircraft are required to produce these components so that their testing can be done by the LM-STAR. A total of $99 million has been allocated for the purchase and support of 88 of these testers during the development phase. While a final decision has not been made on whether maintenance support for the aircraft will be provided by the contractor or at a military facility, the system project office is taking steps to ensure that this tester can be used regardless of where maintenance is accomplished. By contrast, Air Force F/A-22 program officials told us that they have not decided what testers will be used to support this new aircraft, which began development in 1991. The project office has not ensured that all components for the F/A-22 can be tested with a single tester. Project officials told us that the F/A-22 is a very complex aircraft and that opportunities to take advantage of common equipment will be limited. Yet the same contractor that is developing the F/A-22 is also involved in the JSF, which is similarly advanced and complex but uses a common family of testers. While current projections of ATE costs are not available, estimates made early in the F/A-22 development phase exceeded $1.5 billion. In 1993, the House Appropriations Committee recommended that a DOD-wide policy be adopted requiring that the introduction of unique ATE be minimized and that DOD establish an oversight system with sufficient authority, staffing, and funding to ensure compliance. 
DOD established a policy requiring the services to minimize unique types of testers to reduce redundant investments and lessen long-term costs, leveraging its investments in testers across the entire DOD establishment. In 1994, DOD appointed the Navy as its Executive Agent for ATE to oversee the implementation of this policy. As part of the tasking, the Executive Agent for ATE was directed to establish a process so that programs proposing not to use the DOD-designated standard ATE families would have to request a waiver. In accordance with the direction provided by DOD, the Executive Agent established a waiver process. According to data provided by the Executive Agent, since its inception, 30 requests for waivers were submitted for its review. Our analysis indicated that 15 of these requests resulted in waivers or concurrence. The remaining requests were never finalized, were returned to the originating office for further action, or were determined not to require waivers. According to Executive Agent officials, the Executive Agent makes recommendations concerning the waiver requests, but it does not have the authority to disapprove them. Executive Agent officials told us, however, that they have no assurance that all tester acquisitions and modifications are identified or that all required waivers are requested. As a result, they may not be aware of all ATE modifications or acquisitions, or they may not be made aware of them until the process is already under way and it is too late to effect any change. For example, the Air Force did not request a waiver for a $77 million modification to ATE supporting the low altitude navigation and targeting infrared for night (LANTIRN). LANTIRN is a pod system that supports the F-15, F-16, and F-14 aircraft in low-level navigation and target lasing. In its technical comments on our draft report, however, Air Force officials indicated that owing to the nature of the LANTIRN modification, a DOD waiver was not required. 
We continue to believe, however, that the Executive Agent should be notified of tester modifications of this magnitude. In addition to having no assurance that all tester acquisitions and modifications are identified, Executive Agent officials told us they do not have the necessary enforcement authority or resources to effectively implement the waiver process even when they know of a planned acquisition or modification. For example, Executive Agent officials held several discussions with F/A-22 program officials, early in the development phase, concerning the use of common testers; however, there has been no evidence of the Executive Agent’s involvement in F/A-22 ATE development since November 1994. Executive Agent officials do not know whether common testers are being considered. As DOD’s Executive Agent for ATE, the Navy has achieved some success in encouraging the development of common testers and in dealing with technical issues affecting all services. In September 1998, the Executive Agent for ATE reported that DOD had avoided $284 million in costs by implementing DOD’s policy and cited one example in which the Army and the Navy achieved savings of $80 million by jointly developing an electro-optics test capability. Navy officials also told us that they believe ATE planning for the Joint Strike Fighter, which calls for vendors to use standardized test equipment or equipment having commercially available components, can also be considered an accomplishment. In addition, the Executive Agent established integrated process teams to research technical issues dealing with tester commonality, such as efforts to develop open systems architecture. In this regard, DOD provided funds to the Executive Agent during fiscal years 1995 to 1998 for its research and development efforts. Currently, the Navy is leading a joint service technology project aimed at demonstrating that the most advanced technologies can be combined into a single tester. 
The Executive Agent also implemented a process whereby ATE modernization and acquisitions would be reviewed for compliance with DOD policy, and it developed the ATE Selection Process Guide and the ATE Master Plan to aid the services in complying with DOD’s ATE policies. Officials responsible for ATE oversight noted that their role is essential but that the function’s current placement in a single service (the Navy) makes it difficult to ensure that the other services comply with DOD guidance. A report recently prepared by a joint service working group noted continuing problems in the implementation of DOD policy, including ATE obsolescence, delays in modernization efforts, a lack of ATE interoperability among the services, upgrading difficulties, rising support costs, proliferation of equipment that is difficult to support, and systems that are not easily deployed. The services have made limited progress in achieving DOD’s commonality goals for ATE, as established in the early 1990s. The department does not have a joint service forum or body that can oversee the total scope of ATE acquisition and modernization and better promote ATE commonality and the sharing of information and technology across platforms and services. DOD does not have sufficient information concerning the magnitude of the services’ modernization efforts or a departmentwide approach to accomplish ATE modernization in the most cost-effective manner. Without such an approach, the department faces a very expensive and time-consuming ATE modernization effort, with the continued proliferation of unique testers. It will also have no assurance that resources are allocated in the most effective manner to exploit commonality and commercially available technology and products. A single entity within DOD—rather than in one service—may be in the best position to provide overarching oversight and coordination between the services in planning for the modernization of ATE. 
We believe that high-level management commitment within DOD and all the services will be needed to achieve a cultural change that fosters the development of common ATE. We recommend that the Secretary of Defense reemphasize the policy that common ATE be developed to the maximum extent possible. We also recommend that the Secretary reconsider whether placing its Executive Agent for ATE in the Navy—or any single service—is the most effective way to implement the policy. Wherever the Executive Agent is placed organizationally, we recommend that the Secretary give it the authority and resources to
- include representatives from all of the services, with a scope that includes the oversight of ATE acquisition and modifications for all weapon systems;
- establish a mechanism to ensure that all ATE acquisitions and modernizations are identified at an early enough stage to provide a comprehensive look at commonality and interoperability and to ensure a coordinated effort between service entities;
- direct the services to draw up modernization plans for its review so it can identify opportunities to maximize commonality and technology sharing between and within the services; and
- continue efforts to research technical issues dealing with tester commonality, such as the development of open system architecture and other joint service applications.
The Department of Defense provided written comments on a draft of this report, which are reprinted in their entirety in appendix II. The department also provided technical comments, which we have incorporated, as appropriate, into the report. DOD concurred with our recommendations and agreed that it should reemphasize the policy that common automatic test equipment be developed to the maximum extent possible. DOD indicated that it would propose that an ATE acquisition policy statement be included in the next issuance of DOD Instruction 5000.2, “Operation of the Defense Acquisition System,” April 5, 2002. 
DOD also agreed to reconsider whether the placement of its Executive Agent in the Navy—or any single service—is the most effective way to implement its ATE policy. The department further concurred that an Executive Agent for ATE should have the authority and resources to direct the services to draw up modernization plans for its review to maximize commonality, interoperability, and technology sharing between the services. In this regard, DOD agreed that there should be a mechanism to ensure that all automatic test equipment acquisitions and modernizations are identified at an early enough stage to allow for a coordinated effort among service entities. Finally, DOD agreed that the Executive Agent for ATE should include representatives from all services. DOD intends to use its authority recently published in DOD Directive 5100.88, “DOD Executive Agent,” September 3, 2002, to reconsider the placement of the Executive Agent and to provide it with sufficient authority, resources, and mechanisms to carry out its responsibilities. In addition, DOD intends to include the funding for the Executive Agent as part of the Planning, Programming, Budgeting and Execution process and to identify such funding separately so that it is visible within the DOD budget. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to interested congressional committees; the Secretaries of Defense, the Navy, the Air Force, and the Army; the Commandant, U.S. Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to other interested parties on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/. If you or your staff have any questions about the report, please contact me at (757) 552-8100. 
Key contributors to this assignment were Ken Knouse, William Meredith, Harry Taylor, Hugh Brady, and Stefano Petrucci. We reviewed and analyzed available reports, briefings, documents, and records and interviewed officials at the Office of the Secretary of Defense and at Air Force and Navy headquarters organizations, Washington, D.C.; the Naval Air Systems Command located at Patuxent River, Maryland; Air Force Materiel Command and system program offices located at Wright-Patterson Air Force Base, Ohio; Warner Robins Air Logistics Center, Georgia; the North Island Naval Aviation Depot, California; the Navy’s Aviation Intermediate Maintenance Department, Oceana Master Jet Base, Virginia; and the intermediate maintenance department aboard an aircraft carrier based in San Diego, California. The Army was not included in the scope of this study because our focus was primarily on fixed-wing aircraft and because of the Army’s efforts to standardize its automatic test equipment (ATE) around a single family of testers, a situation similar to the Navy’s. To identify the problems that Air Force and Navy aviation (including the Marine Corps) is facing with regard to ATE, we interviewed personnel responsible for policies and oversight, obtained applicable regulations and other guidance, and analyzed data provided by the services on various testers. We provided a pro forma for the Air Force’s and Navy’s use in documenting their inventory of ATE, identifying obsolete testers, and providing estimates of modernization and replacement time frames and costs. The Navy’s data on ATE were provided by the central office that manages common test equipment—PMA-260, within the Naval Air Systems Command—and by the Air Force’s Automatic Test System Division at Warner Robins Air Logistics Center. We also discussed obsolescence issues and ATE problems with the managers of shore-based, aircraft carrier, and depot maintenance activities. 
We reviewed and analyzed our prior reports and ongoing efforts, and reports of other organizations to provide a historical and contextual framework for evaluating ATE policies and issues, for documenting readiness rates of selected aircraft, and documenting the processes put in place by the Department of Defense (DOD) to oversee the services’ efforts to acquire and modernize ATE. To determine how successful DOD and the services have been in addressing the proliferation of unique testers, we held discussions with the responsible offices within each service and DOD, analyzed regulations and guidance, and reviewed studies and other documentation. We focused our work concerning this objective at the Navy office designated as DOD’s Executive Agent for Automatic Test Equipment—PMA-260 within the Naval Air Systems Command—and the Air Force’s Automatic Test System Division at Warner Robins Air Logistics Center. At these offices, which have responsibility for ATE acquisition or sustainment, modernization, and oversight, we held discussions with responsible officials, obtained documentation regarding responsibilities and decisions, and reviewed files for specific ATE acquisition and modernization programs. We also obtained information from individual system program offices, for selected aircraft, located at Wright-Patterson Air Force Base and selected Navy and Air Force depots and intermediate maintenance facilities. Because we found that Air Force testers are generally unique to specific aircraft, we selected the F-15, B-1B, and B-2 for more detailed analysis, as these are considered to be front-line aircraft depended upon heavily by the Air Force to accomplish its mission. We also obtained information on ATE acquisition for two fighter aircraft currently under development: the Joint Strike Fighter and the F/A-22. We performed our review from January 2002 through March 2003 in accordance with generally accepted government auditing standards. 
Defense Inventory: Better Reporting on Spare Parts Spending Will Enhance Congressional Oversight. GAO-03-18. Washington, D.C.: October 24, 2002.
Defense Inventory: Improved Industrial Base Assessments for Army War Reserve Spares Could Save Money. GAO-02-650. Washington, D.C.: July 12, 2002.
Defense Inventory: Trends in Services’ Spare Parts Purchased from the Defense Logistics Agency. GAO-02-452. Washington, D.C.: April 30, 2002.
Defense Logistics: Opportunities to Improve the Army’s and Navy’s Decision-Making Process for Weapons Systems Support. GAO-02-306. Washington, D.C.: February 28, 2002.
Military Aircraft: Services Need Strategies to Reduce Cannibalizations. GAO-02-86. Washington, D.C.: November 21, 2001.
Defense Logistics: Actions Needed to Overcome Capability Gaps in the Public Depot System. GAO-02-105. Washington, D.C.: October 12, 2001.
Defense Logistics: Air Force Lacks Data to Assess Contractor Logistics Support Approaches. GAO-01-618. Washington, D.C.: September 7, 2001.
Defense Inventory: Navy Spare Parts Quality Deficiency Reporting Program Needs Improvement. GAO-01-923. Washington, D.C.: August 16, 2001.
Army Inventory: Parts Shortages Are Impacting Operations and Maintenance Effectiveness. GAO-01-772. Washington, D.C.: July 31, 2001.
Navy Inventory: Parts Shortages Are Impacting Operations and Maintenance Effectiveness. GAO-01-771. Washington, D.C.: July 31, 2001.
Air Force Inventory: Parts Shortages Are Impacting Operations and Maintenance Effectiveness. GAO-01-587. Washington, D.C.: June 27, 2001.
Defense Inventory: Information on the Use of Spare Parts Funding Is Lacking. GAO-01-472. Washington, D.C.: June 11, 2001.
Defense Inventory: Approach for Deciding Whether to Retain or Dispose of Items Needs Improvement. GAO-01-475. Washington, D.C.: May 25, 2001.
Military Aircraft: Cannibalizations Adversely Affect Personnel and Maintenance. GAO-01-93T. Washington, D.C.: May 22, 2001.
Defense Inventory: Army War Reserve Spare Parts Requirements Are Uncertain. GAO-01-425. Washington, D.C.: May 10, 2001.
The services have billions of dollars worth of outdated and obsolete automatic test equipment (ATE) used to test components on military aircraft or weapon systems. Department of Defense (DOD) policy advocates the development and acquisition of test equipment that can be used on multiple types of weapon systems and aircraft and used interchangeably between the services. At the request of the Subcommittee's Chairman, GAO examined the problems that the Air Force, Navy, and Marine Corps are facing with this aging equipment and their efforts to comply with DOD policy. DOD and the services face growing concerns regarding obsolete automatic test equipment, given the high costs of modernizing or replacing it and its potential effect on aircraft readiness. The Navy and Air Force, for example, estimate that they will spend billions of dollars to modernize or replace this equipment, much of which was acquired in the 1970s and 1980s. In the meantime, the aging testers are becoming increasingly out of date and more difficult to support. When the testers do not work properly, maintenance can suffer and readiness can be adversely affected. Since 1994, DOD policy has advocated the acquisition of test equipment that can be used on multiple weapon systems and aircraft and can be used interchangeably between the services; progress in this regard has been slow. For example, although the Navy set out in 1991 to replace 25 major tester types with one standard tester by 2000, budget cuts and delays in developing software have resulted in delays in completing the replacement of these obsolete testers until 2008. The Air Force has only recently initiated a test equipment modernization plan. However, little evidence suggests that consideration is being given to the acquisition of equipment that would have common utility for more than one weapon system as DOD policy advocates. 
For procurement of new weapon systems, the Air Force is giving little consideration to the use of a common tester, while a common tester is planned for use as the primary tester for the Joint Strike Fighter. Although DOD tasked the Navy as its Executive Agent for automatic test equipment in 1994, the agent has made only limited progress in achieving compliance across all the services with DOD policy advocating the development of common systems. While the Executive Agent can point to some successes in individual systems, its officials acknowledged that the organization does not have sufficient authority or resources to fully implement the policy and achieve the maximum commonality possible.
Over the past 7 years, DOD has increasingly used the SRB program to address retention shortfalls. The program’s budget has grown from $308 million in fiscal year 1997 to an estimated $734 million in fiscal year 2003—a 138 percent increase after the effect of inflation was held constant (see fig. 1). The budget is estimated to grow to $803 million in fiscal year 2005, with most of the projected growth resulting from increases in the Air Force SRB program budget. Our 2002 report noted that in fiscal year 2001 the Air Force extended reenlistment bonuses to 80 percent of its specialties. In recent years, Congress has appropriated less money than the services have requested for the SRB program. Based on our work, in fiscal year 2003 Congress appropriated $32 million less than DOD requested. Congressional committees proposed further SRB budget reductions during their reviews of DOD’s fiscal year 2004 budget request. The House Appropriations Committee proposed a $44.6 million reduction; the Senate Appropriations Committee, a $22 million reduction; and the Senate Armed Services Committee, a $46 million reduction. The Senate Armed Services Committee additionally noted concerns about proposed SRB program budget increases at a time when overall retention rates are robust and the benefits of military service are increasing overall. DOD appealed these proposed reductions, noting that “the effects of an improving economy and the waning emotional patriotic high of the decisive victory in Operation Iraqi Freedom will combine to increase pressures on both the recruiting and retention programs.” For fiscal year 2004, Congress appropriated $697 million for the SRB program, which was a reduction of $38.6 million from the amount DOD requested. Despite increased use of the SRB program, DOD has cited continued retention problems in specialized occupations such as air traffic controller, linguist, and information technology specialist. 
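The budget figures reported above are internally consistent, as a quick arithmetic check shows. The dollar amounts come from the report itself; the script is merely an illustrative sketch:

```python
# SRB program budget figures cited in the report, in millions of constant dollars.
fy1997_budget = 308
fy2003_budget = 734  # estimated

growth_pct = (fy2003_budget - fy1997_budget) / fy1997_budget * 100
print(f"FY1997 -> FY2003 growth: {growth_pct:.0f} percent")  # ~138 percent

# FY2004: $697 million appropriated, $38.6 million below the request,
# implying a request of roughly $735.6 million.
fy2004_requested = 697.0 + 38.6
print(f"Implied FY2004 request: ${fy2004_requested:.1f} million")
```

The growth calculation confirms the reported 138 percent increase, and the appropriation figures imply a fiscal year 2004 request of about $735.6 million.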
A more favorable picture exists with regard to overall retention. All of the services reported that they met overall retention goals for fiscal year 2002 and, with the exception of the Air Force missing its retention goal for second-term airmen, expect to meet overall retention goals in fiscal year 2003. Further bolstering these retention expectations are recent survey results showing improvements in servicemembers’ attitudes toward remaining in the military. For example, the 2002 DOD-wide status of forces survey found that the career intent of military personnel had improved between 1999 and 2002, rising from 50 to 58 percent. The survey results showed that retention attitudes were notably better for junior enlisted personnel (up 11 percent) and junior officers (up 13 percent). In addition, the 2002 Air Force-wide quality of life survey found that 66 percent of enlisted personnel reported they would make the Air Force a career, an increase from the 58 percent reported in 1997. According to DOD officials, the effects of more recent events, such as extended deployments and other higher operations tempo issues, could change servicemembers’ attitudes toward remaining in the military. Congressionally approved reforms in basic pay implemented during the last 3 years were intended in part to address retention problems, particularly with mid-grade enlisted personnel. In the 2002 Quadrennial Review of Military Compensation, DOD attributed the increased use of SRBs in the late 1990s to a growing pay discrepancy between civilians and the mid-career enlisted force. For that period, the review noted an increased use of bonuses for personnel with 10 to 14 years of service. DOD noted that while bonuses are a very important compensation tool, their use is intended for specific purposes and for relatively short periods of time. 
According to that review, bonuses are appropriate for use within particular skill categories, not as a tool for resolving military and civilian pay differentials across an entire segment of the force. The report noted that widespread pay differentials should be remedied through pay table restructuring. Pay table restructuring began in fiscal year 2001, and additional military pay adjustments have been approved in subsequent budgets. DOD’s May 2003 report did not thoroughly address four of the five congressional concerns about effective and efficient management of the SRB program. First, the report addressed SRB program effectiveness and efficiency only indirectly, discussing bonuses as a general military retention tool rather than the program’s effectiveness and efficiency in targeting bonuses to improve retention in selected critical occupations. Second, DOD did not permit us to review the draft guidance, but—based on DOD’s comments on our 2002 SRB report, excerpts of draft criteria contained in DOD’s mandated report, and our discussions with DOD officials—the replacement guidance could expand the SRB program by giving the services more flexibility in designating occupations as critical and could either eliminate or weaken the requirement for annual SRB program reviews. Third, OSD did not outline steps to match program execution to appropriated funding as the mandate required; instead, OSD reiterated the need for program-execution flexibility. Fourth, OSD’s evaluation of the services’ administration of their SRB programs was limited, relied largely on service-provided descriptions, and did not use consistent procedures and metrics. Finally, as required by the fifth concern in the mandate, DOD identified the most salient advantages and disadvantages resulting from paying SRBs as lump sums. DOD’s report did not directly discuss how effectively and efficiently each service is currently using the SRB program to address retention problems in critical occupations. 
Although the mandate noted, “a reassessment of the program is warranted to ensure it is being managed efficiently,” DOD’s response to concern one did not provide sufficient detail to document the effective and efficient use of the program in awarding SRBs. In response to one of the other four congressional concerns, DOD stated that the “intent of retention bonuses is to influence personnel inventories in specific situations in which less costly methods have proven inadequate or impractical.” The report did not, however, document what methods had been used previously or the cost-effectiveness of those methods in achieving desired retention levels. Also absent from the report was a discussion of how key factors influence the current use of SRBs. Examples of key factors include the effects of changes in the basic pay, overall retention rates, and civilian unemployment. For example: Despite increasing basic pay to address the discrepancy between military and civilian pay noted in the 2002 Quadrennial Review of Military Compensation, the budgets for the SRB program are projected to grow to $803 million in fiscal year 2005. In comments received on our preliminary observations briefing, DOD officials noted that our use of constant 2004 dollars in our budget trend analysis did not fully account for the effects of the basic pay changes that exceeded the inflation level and thus increased the size of individual bonuses. At the same time, future SRB program budgets do not show decreases that might be expected as these pay table changes address overall military-civilian pay discrepancies and problems identified within various pay grades. The report did not address the extent to which recent higher levels of overall retention offer opportunities for reducing the number of occupations eligible for SRBs or the bonus amounts awarded for reenlistment. 
DOD officials have noted that all of the services met or exceeded their aggregate retention goals in fiscal year 2002 and that strong overall retention is expected to continue. However, they cited retention shortfalls in some occupational specialties as areas of concern. Although a generally positive aggregate retention climate might present DOD with opportunities to curtail use of its SRB program, the report did not discuss under what conditions reductions in the program might or might not be appropriate at this time. Despite noting a relationship between civilian unemployment rates and military retention, DOD’s report did not indicate whether civilian unemployment—which is at a 9-year high—might result in the need for fewer SRBs being offered and possibly at lower bonus levels. One study cited in DOD’s report noted that there is a relationship between higher unemployment rates and improved overall military retention. In part of its answer to concern three, DOD noted that changes in the economy and labor market drive changes in actual reenlistment rates. Just as periods of relatively lower civilian unemployment might suggest the need for greater use of the SRB program, periods of relatively higher unemployment might conversely suggest less need for SRBs. Despite civilian unemployment being at its highest rate in several years, the SRB program budget is projected to increase in fiscal year 2005. Instead of directly addressing program effectiveness and efficiency, the 2003 report discussed the general benefits of using bonuses to retain military personnel. DOD’s report cited numerous studies that demonstrated or postulated this effect. However, findings from some studies may not be readily generalized to the way that the SRB program is currently managed or to the economic conditions that currently exist. 
More specifically, some studies used outdated retention data obtained in the mid-1970s or were performed in a very different retention environment (e.g., the increase in force size during the 1980s and the large draw-down of military forces in the 1990s). Even given our concerns about some of the findings, we believe DOD presented sufficient support for its conclusion that bonuses can be effective in promoting retention. A largely unaddressed but more pertinent issue is how effectively and efficiently DOD applied this tool to improve retention in critical occupations under recent and current economic conditions. DOD did not permit us to review the draft guidance that will replace the current DOD directive and the DOD instruction canceled in 1996. Our findings for this concern are based on DOD’s comments on our 2002 SRB report, excerpts of draft criteria contained in DOD’s mandated report, and our discussions with DOD officials. Changes to the guidance could lower the threshold required for designating occupations as critical and may eliminate or weaken the requirement for formal annual reviews of the SRB program. DOD’s planned changes to the replacement guidance could provide the services with greater flexibility for designating a specialty as critical but, by lowering the threshold required for such a designation, could weaken the controls for targeting the specialties receiving SRBs. The canceled 1996 instruction required the services to consider five criteria before designating a specialty critical and making it eligible for SRBs, but DOD’s 2003 report stated that the revised program instruction would require occupations to meet a lower threshold—meeting “at least” one of five criteria. For the period since 1996 when the instruction was canceled, our 2002 report found that, in some cases, the services had already been using only one of the five criteria to designate occupations for inclusion in the program. 
This allowed the services to define broadly what constituted a critical occupation and to include more occupations than would likely have qualified if all five criteria had been considered. DOD’s planned changes could also eliminate or weaken the requirement for formal annual reviews of the SRB program and thereby weaken the ability of Congress and DOD to monitor the program and ensure that it targets only critical specialties. To implement the SRB program, DOD Directive 1304.21 assigns specific responsibilities for administering the program to the OSD and to the service Secretaries. According to this directive, the Assistant Secretary of Defense for Force Management Policy, under the Under Secretary of Defense for Personnel and Readiness, is responsible for annually reviewing and evaluating the services’ enlisted personnel bonus programs in conjunction with the annual budget cycle. These reviews are to include an assessment of the criteria used for designating critical military specialties. As a result of these reviews, the OSD is to make the revisions needed to attain specific policy objectives. Our 2002 report found that DOD had not conducted any of the required annual program reviews since 1991. In its response to our 2002 SRB report, DOD stated that it plans to eliminate those requirements from the replacement guidance. More recently, a DOD official stated that the new guidance will require periodic reviews, but neither the frequency nor the details of how these reviews would be conducted were explained. In its report to Congress, DOD maintained that much of the SRB program oversight takes place during ongoing internal service program budget reviews. In contrast, we concluded in our 2002 report that those program budget reviews were limited in scope and did not provide the detailed evaluation needed to ensure the program was being implemented as intended. 
A more in-depth discussion of the current limited oversight is provided in our assessment of DOD’s response to the fourth concern. In contrast to the previously mentioned changes, DOD’s report noted some steps that we believe could strengthen controls on the SRB program. According to the report, the new SRB program instruction will (1) require the services to establish parameters to define “critical shortages” and (2) base those requirements on factors such as the potential impact of a shortage on mission accomplishment. In addition, DOD has recently established a working group that has been tasked with developing a “common understanding and definition of critical skills.” Previously, we found that DOD had not clearly defined the criteria the services were to use in designating critical occupations since the SRB program instruction was canceled in 1996. Contrary to the mandate, DOD’s 2003 report did not outline the steps it would take to match program execution with appropriated funding. Instead, DOD stated that the services need execution flexibility and have operated consistent with the law and within the overall Military Personnel appropriation. Our trend analysis in current year dollars showed that the services spent a combined total of $259 million more than Congress appropriated for the SRB program in fiscal years 1999-2002. DOD’s use of this flexibility has resulted in the services overspending their SRB budgets by as much as $111 million in a single year—fiscal year 2001. More recently, two of the services stayed within their appropriated budgets. In fiscal year 2002, the Air Force and Marine Corps spent, respectively, $26 million and $4 million less than their fiscal year 2002 SRB appropriation. However, the Army and Navy exceeded their appropriated SRB budgets by $38 million and $21 million, respectively. DOD noted that the services can reallocate funds within the Military Personnel appropriation without seeking congressional authority. 
In the report, DOD did not agree with the congressional concern that program expenditures needed to match funding levels appropriated specifically for the SRB program. Rather, DOD maintained that monies were available from other parts of the Military Personnel appropriation if a service needed additional SRB funding in a fiscal year. DOD’s response noted that budget submission timelines require reenlistment forecasts up to 2 years prior to execution and that intervening changes in the economy and labor market can add uncertainty and drive changes in actual reenlistment rates. Using the services’ in-year, or current, estimates—created during the year of program execution—we found that the services had exceeded their fiscal year 1999-2002 estimates for the number of expected SRB reenlistments by a combined total of 32,466 personnel. Furthermore, our current trend analysis of their budget justifications showed that for the Army and Navy, reallocation or reprogramming of funds had become a recurring pattern of activity. In our 2002 report, we concluded that better OSD program oversight and management would have required the services to justify their need to exceed appropriations during fiscal years 1997-2001. DOD’s limited evaluation of the services’ SRB programs relied primarily on program descriptions provided by the services. The report presented different issues for each service and used inconsistent procedures and metrics to reach conclusions about the effectiveness and efficiency of each service’s program administration. Absent was a discussion of key performance indicators, the means used to verify and validate the measured values, and other characteristics such as those GAO identified in its report assessing agency annual performance plans. 
The absence of a consistent, explicit methodology made it difficult to identify (1) best practices that might be applied from one service to another and (2) other insights that could help each service administer its SRB program more effectively and efficiently. Although OSD assembled a multi-service panel to discuss the evaluation, DOD’s response consisted largely of program descriptions that the services supplied. Each service made statements about the effectiveness of its program but provided insufficient documentation to support those statements. OSD conducted its last comprehensive review of the SRB program in 1991. As noted earlier in our assessment of DOD’s response to the second concern, DOD stated that it intends to eliminate the requirement to perform detailed annual program reviews when its replacement program directive is issued. In introductory comments to the 2003 report, DOD stated that the SRB program is evaluated annually within the context of three Planning, Programming, and Budgeting System activities. In our 2002 report, we found that those reviews, conducted by the DOD Comptroller and the Office of Management and Budget (OMB), and the testimony provided to Congress were limited. When the services prepare budget submissions for the SRB program, they discuss only the small sample of occupations included in their justification books. As we noted in our 2002 report, the DOD Comptroller stated that the budget submissions are not detailed programmatic evaluations. DOD’s 2003 report also cited OMB reviews as part of an evaluation of the programs. During the preparation of our 2002 report, OMB officials told us that their reviews were limited and did not constitute a detailed assessment of the services’ programs. DOD’s 2003 report stated that the services’ out-year budgets were carefully reviewed during congressional testimony. It is our view that congressional testimony does not represent a detailed programmatic review of a program this complex. 
For example, DOD’s March 11, 2003, testimony before the Military Personnel Subcommittee of the Senate Armed Services Committee included only limited statements about the SRB program. DOD’s report listed some positive steps that the services have proposed to administer the SRB program more effectively and efficiently. For example, the Navy and Army are validating and improving the models used to manage their SRB programs, and the Air Force has created a new bonus review board to keep its leaders apprised of how the SRB program is functioning. At the time of our review, the services were just starting to implement these steps to improve their programs, and there were no data with which to determine how effective and efficient these efforts would be. DOD identified the most salient advantages and disadvantages resulting from implementing a lump sum payment method for paying retention bonuses. We generally concur with DOD’s observations about the positive and negative aspects of using lump sum bonuses. DOD’s report cited a 1985 GAO study that found lump sum payments had three main advantages over incremental payments: they are more cost-effective, provide better visibility to Congress, and are more adaptable to budget cuts. The 2003 DOD report cited another important consideration in awarding bonuses in lump sum payments. Because enlisted personnel prefer “up-front” payments and are willing to accept less money initially rather than more money offered in the future, we believe that the federal government could reenlist more personnel for the same amount of money if bonuses were paid in a lump sum. DOD cited several disadvantages to using a lump sum payment option. For example, there are significant up-front costs associated with paying both lump sum SRB payments in the implementation year and completing the anniversary payments for SRBs awarded previously. The first year of change would require the largest budget increase, and each subsequent transition year would become less costly. 
The implementation of a lump sum SRB program could become cost neutral over the long term if bonuses paid in a lump sum eliminated the need for equal amounts of anniversary payments in succeeding years. It could even save money if enough reenlistees could be attracted with smaller bonuses, because up-front compensation—even if less—is more attractive than compensation promised in the future. DOD’s report addressed other potential disadvantages of using a lump sum payment method. These include the possibility that a recipient will fail to stay in the military for the full reenlistment period after receiving a bonus and the problem associated with recouping all or part of bonus amounts from personnel who do not complete their obligated term of service. Despite these disadvantages, our 1985 report stated our support for the use of lump sum retention bonuses. The Marine Corps began using the lump sum payment option for its SRB program in fiscal year 2001 and is the only service currently using this payment method. In February 2004, the Marine Corps expects to have preliminary results from an evaluation of its use of lump sum payments. Although not required by the mandate to do so, DOD and the services could have made the response to concern five more informative for Congress by identifying alternative strategies for implementing the lump sum option and estimating the costs of each strategy for each service. For example, one strategy might be to phase in the lump sum payment option. Phasing in lump sum payments could give DOD greater program administration flexibility and reduce the budgetary problems caused by switching from installment payments in a single year. Overall, our analysis of DOD’s May 2003 congressionally mandated report on the SRB program showed that DOD’s report did not provide sufficient information to enable Congress to determine whether the program is being managed effectively and efficiently. 
With one exception, DOD’s report did not thoroughly address congressional concerns about the effective and efficient management of the SRB program. Of DOD’s responses to the five congressional concerns, three were incomplete or nonresponsive—those regarding program effectiveness and efficiency in correcting retention shortfalls in critical occupations, DOD actions to match program execution with appropriations, and DOD’s evaluation of the services’ program administration. A fourth response—regarding replacement program guidance—did not provide the information essential for us to make an independent determination as to the response’s adequacy. DOD directly and fully addressed one of the mandated concerns—the advantages and disadvantages of lump sum bonus payments. Although the SRB program is expected to grow to over $800 million in fiscal year 2005, the report did not address factors that may have reduced the services’ retention concerns and could reduce SRB program costs. Underlying many of these shortcomings is a lack of empirically based information caused by DOD’s limited reviews of the SRB program and its inconsistent use of evaluation procedures and metrics. DOD’s possible elimination of the requirement for a detailed annual review and continued reliance on service-specific procedures and metrics could further weaken Congress’s ability to monitor the SRB program. To assist Congress in its efforts to monitor the management of the SRB program and to ensure that DOD is effectively and efficiently targeting retention bonuses to critical occupations, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to (1) retain the requirement for an annual review of the SRB program and (2) develop a consistent set of methodologically sound procedures and metrics for reviewing the effectiveness and efficiency of all aspects of each service’s SRB program administration. 
In written comments on a draft of this report, DOD concurred with our recommendations. DOD further stated that, with regard to our recommendation to develop review procedures and metrics, it would (1) conduct research to develop meaningful metrics for reviewing the effectiveness and efficiency of all aspects of each service’s administration of the SRB program and (2) implement those metrics so that they are consistent with DOD’s Human Resource Strategy Plan. DOD’s comments are reprinted in their entirety in appendix II. We are sending copies of this report to the Secretary of Defense. We will also make copies available to appropriate congressional committees and to other interested parties on request. In addition, the report will be available at no charge at the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please call me at (202) 512-5559. Key staff members contributing to this report were Jack E. Edwards, Kurt A. Burgeson, Nancy L. Benco, and M. Jane Hunt. We reviewed the Department of Defense’s (DOD) May 2003 congressionally mandated report and documents used in the preparation of that report. That information was supplemented with prior Selective Reenlistment Bonus (SRB) program guidance, budget request documentation, and other information gathered during our 2002 review of the SRB program. To assess the adequacy and accuracy of the information contained in DOD’s report, we obtained and reviewed documentation used by the Office of the Secretary of Defense (OSD) to support its responses. For example, we reviewed the eight studies cited in DOD’s response to concern one in the mandate. In addition, we reviewed information provided by each of the services, as well as past GAO and DOD reports on the SRB program. We compared findings from these past reports to DOD’s mandated responses to assess the validity of what was presented. 
We updated the program budget analysis from our 2002 review using budget data contained in DOD’s Military Personnel budget justification books prepared for Congress. We sought to review updated SRB program guidance, but DOD indicated that these pre-decisional documents would not be released until the final versions had been approved. We met with DOD officials to update information obtained during our 2002 review of the SRB program. Interviews were primarily conducted with officials in the Office of the Under Secretary of Defense for Personnel and Readiness because these officials were the primary authors of DOD’s report. We also met with personnel responsible for administering the services’ SRB programs. We obtained updated retention data contained in prepared statements used by DOD during congressional hearings. We also reviewed the results of DOD’s 2002 status of forces survey and the Air Force’s 2002 quality of life survey. We conducted our review from June through September 2003 in accordance with generally accepted government auditing standards.
The Department of Defense (DOD) uses the Selective Reenlistment Bonus (SRB) program to reenlist military personnel in critical specialties. In fiscal years 1997-2003, the program budget rose 138 percent, from $308 million to $734 million. In fiscal year 2003, the House Appropriations Committee directed the Secretary of Defense to reassess program efficiency and report on five concerns: (1) how effective the program is in correcting retention shortfalls in critical occupations, (2) how replacement guidance will ensure targeting critical specialties that impact readiness, (3) how DOD will match program execution with appropriated funding, (4) how well the services' processes for administering the program work, and (5) advantages and disadvantages of paying bonuses in lump sum payments. The committee also directed GAO to review and assess DOD's report. Despite congressional concerns about the SRB program, DOD's May 2003 report stated that the program is managed carefully, bonuses are offered sparingly, and the services need flexibility in administering the program. However, DOD's responses did not thoroughly address four of the five SRB program concerns contained in the mandate. As a result, Congress does not have sufficient information to determine if the program is being managed effectively or efficiently. DOD has not issued replacement program guidance and did not allow us to review the guidance that has been drafted. DOD's report focused primarily on criteria for designating occupations as critical, but the report did not address an important change--the potential elimination of the requirement for conducting annual program reviews. In response to our 2002 report, DOD stated that this requirement would be eliminated from future program guidance. DOD recently told us that the new guidance will require periodic reviews, but neither the frequency nor the details of how these reviews would be conducted were explained. 
DOD conducted a limited evaluation to address the congressional concern about how well the services are administering their programs. The response consisted largely of program descriptions provided by the services. Among other things, DOD did not use a consistent set of procedures and metrics to evaluate each of the services' programs. Consequently, it is difficult to identify best practices or to gain other insights into ways in which the effectiveness and efficiency of the services' programs could be improved. DOD thoroughly addressed the congressional concern pertaining to the advantages and disadvantages of paying SRBs as lump sums.
ERISA established the broad fiduciary requirements related to private pension plans and was designed to protect the pension and welfare benefit rights of workers and their beneficiaries. The act requires a plan fiduciary to act “…solely in the interest of plan participants and beneficiaries and for the exclusive purpose of providing benefits” to them and to act “…with the care, skill, prudence, and diligence under the circumstances then prevailing that a prudent man acting in a like capacity and familiar with such matters would use.” Failure to act in accordance with these requirements might constitute a breach of fiduciary duty. Breaches of the fiduciary duty to act solely in the interest of plan participants and beneficiaries with respect to proxy voting could arise when a fiduciary has a conflict of interest. Conflicts of interest occur in a variety of ways in proxy voting. Conflicts occur when a plan fiduciary or proxy voter has either business or personal interests that compete with the interests of participants. When conflicts are not appropriately managed, they could lead to a breach of fiduciary responsibility or, at least, may raise concern that a breach has occurred. For example, an SEC investigation showed that the DeIB division had an undisclosed business relationship with HP, which may have influenced the vote DeAM cast on a proposed merger between HP and Compaq Computer Corporation. ERISA’s fiduciary requirements apply to plan sponsors, trustees, managers, and others who act as fiduciaries. These requirements do not explicitly address proxy voting; however, DOL—having responsibility for the investigation and enforcement of violations of ERISA, which includes provisions related to fiduciary responsibility—has stated that the fiduciary act of managing plan assets that are shares of corporate stock generally includes the voting of proxies pertaining to those shares of stock. 
The provisions of ERISA were enacted to address public concerns that funds of private employee benefit plans were being mismanaged and abused. DOL can take several actions to correct fiduciary violations it identifies. These include acceptance of voluntary fiduciary agreements to implement corrective actions, initiation of civil litigation in federal district court, and referral of certain violations to other enforcement agencies. On the matter of proxy voting, DOL has issued several letters and bulletins discussing the duties of pension plan fiduciaries. For example, the “Avon Letter,” released in 1988, stated that the voting of a proxy is a fiduciary duty and that the responsibility for voting falls on the plan’s trustee unless otherwise delegated. In its “ISS letter,” issued in 1990, DOL stated, among other things, that to carry out his or her fiduciary responsibilities with respect to monitoring activities, the plan fiduciary must be able to periodically review voting procedures and actions taken in individual situations to determine whether the investment manager is fulfilling its fiduciary responsibility. Furthermore, DOL issued Interpretive Bulletin (IB) 94-2 in 1994, which clarified the guidance in the previous two letters and also stressed the importance of statements of investment policy, including voting guidelines. While DOL said that maintenance of such statements of investment policy is consistent with ERISA, DOL officials said that they do not have the statutory authority to require plans to maintain such statements. SEC, under the Investment Company Act of 1940, regulates companies, including mutual funds, that engage primarily in certain operations, such as investing, reinvesting, and trading in securities, and whose own securities are offered to the investing public. A primary mission of SEC is to protect investors and maintain the integrity of the securities markets through disclosure and enforcement. 
Employees in participant-directed pension plans might be given the choice of investing in securities, including employer securities, as well as a variety of mutual funds. Because plan participants may have such investment options, securities law protections applicable to investors are relevant to plan participants. In addition, some pension plans use investment managers to oversee plan assets, and these managers may be subject to securities laws. Congress previously studied the issue of DOL’s enforcement and proxy voting. In the 1980s, reports emerged that fiduciaries were not voting their proxies or that conflicts of interest may have influenced the decisions of some plan fiduciaries. Congress consequently became concerned about whether fiduciaries were fulfilling their responsibility to protect the interests of pension plan participants and beneficiaries. Because ERISA does not specifically lay out what the fiduciary responsibility is regarding proxy voting, many fiduciaries were thought to be unclear about their responsibility to vote proxies and maintain voting guidelines. This was cited as one of the major factors that led the Subcommittee on Oversight of Government Management, Senate Committee on Governmental Affairs, to conduct an investigation of and hold hearings in 1986 on DOL’s enforcement of ERISA. Among other things, the Subcommittee concluded that disclosure of proxy votes would facilitate DOL’s enforcement efforts by providing the agency and other interested parties with much needed information. DOL officials believe that the agency does not have the statutory authority to require plan fiduciaries to publicly disclose their proxy votes and guidelines. Some experts we interviewed said that conflicts of interest exist in the proxy voting system and limited disclosure makes proxy voting vulnerable to conflicts of interest. 
Conflicts of interest occur because of the various business relationships that may influence a plan fiduciary’s or proxy voter’s vote. For example, a conflict occurs when a company provides investment advisory services for a company-sponsored pension plan and also provides investment banking services to the company sponsoring that plan. Although conflicts will exist, limited disclosure makes proxy voting vulnerable to them. Because of this lack of transparency, participants do not have the information needed to raise questions regarding whether proxy votes were cast solely in their interest. Business associations between a proxy voter and any entity that may influence the vote present a conflict of interest. Some experts we interviewed explained that these associations may form whether proxies are internally or externally managed because company management has direct access to the proxy voter, who is either an employee, in the case of internally voted proxies, or a service provider, in the case of externally voted proxies. When a portion of a company’s pension plan assets is invested in its own company stock, the proxy voter may be particularly vulnerable to conflicts of interest because management can directly influence voting decisions. In addition, because company stock held in the company’s own pension plan is typically managed internally, the proxy voter may at times be more concerned about his or her own interests. While ERISA states that fiduciaries must act solely in the interest of pension plan participants, there is no requirement that an independent fiduciary be appointed to provide additional protections for participants with company stock in their pension plans. Several experts explained that conflicts of interest that occur in this type of arrangement are particularly problematic. 
For example, one expert said that since proxy voting and other decisions relating to company stock are much more likely to be handled in-house, votes may be cast in accordance with the wishes of the company’s senior management. In such cases, the company’s management may not consider the best interest of plan participants and beneficiaries independently from management’s opinion of what is best for the company. The Enron case provides an example of how management’s own concerns may come before those of participants and beneficiaries. In addition, some experts said that when proxies are internally managed, the proxy vote may be influenced by the fiduciary’s own personal concerns, particularly in instances when casting a vote solely in the interests of plan participants and beneficiaries means voting against company management. Specifically, if the plan fiduciary is a lawyer, investment analyst, or a member of the management team for the company, his or her proxy vote on management proposals, such as a merger or acquisition, or on individuals chosen to serve on the board of directors could be influenced by concerns about personal standing, or job security, in the company. A few experts said that a fiduciary in this situation is not likely to vote against a management proposal such as an executive compensation package because of these personal concerns. Additionally, DOL officials said that conflicts for an internal fiduciary could arise when the company is experiencing problems, which, if publicly known, would cause stock value to decline. In order to protect participants, fiduciary duty might require the fiduciary to publicly disclose the information to participants and other shareholders and sell shares of the company stock. Insider trading rules would, however, prevent the fiduciary from taking action on nonpublic information. 
However, making this information public could cause a rapid decline in share value as investors sell off their shares of stock, thereby potentially harming the company and the fiduciary’s own personal standing in the firm. Because company management could influence the fiduciary responsible for voting the proxies related to the company’s own stock, management may have a significant amount of influence over the outcome of a proxy contest. In order to assess the influence management could have in a proxy contest, we conducted an analysis of Fortune 500 companies. (See appendix I for further information on our methodology.) In our analysis, we compared the number of voting shares of company stock held in a company’s pension plans to the total voting shares held in the market. About 272 of the Fortune 500 companies reportedly had their own company stock in their pension plans and in separate accounts, such as master trust agreements, and held over $210 billion in employer securities in plan year 2001. Of those companies, 27 percent held 5 percent or more of company stock in their company’s pension and benefit plans, while another 26 percent held between 2 and 5 percent. None of the Fortune 500 firms we analyzed held more than 21 percent of the total voting power of their company’s stock in their pension and welfare benefit plans, while 47 percent held less than 2 percent of company stock in their company’s pension and benefit plans. While the results showed that the pension and welfare benefit plans of the Fortune 500 companies we analyzed were not holding large percentages of the total voting power of a company’s shares, these findings may still be significant. 
For example, in a contentious proxy contest such as a merger and acquisition where 51 percent of outstanding shares is needed to complete the merger, a company whose pension assets comprise just 2 percent of its total issued stock might cast the deciding vote if the proxy contest is close. In this case, how the plan fiduciary or proxy voter casts its vote could make the difference between 49 percent and 51 percent—that is, the difference between the merger being approved or rejected. Some of the largest and most influential pension plans typically hold no more than 1 to 2 percent of any one company’s shares in their plans’ investment portfolios. As such, a Fortune 500 company whose pension plans hold more than 1 or 2 percent of its own company stock could have an advantage in a proxy contest. When the fiduciary is not an employee of the plan sponsor—that is, he or she is external to the company—experts explained that a variety of different types of conflicts might also arise because of business associations. For example, when the proxy voter is an investment manager that is part of a larger corporation that provides a variety of services, experts said that business relationships between the company’s other branches and the plan sponsor might influence the investment manager’s voting decisions. These relationships may influence the proxy voter to vote with the plan sponsor’s management, particularly if the proxy voter wishes to maintain business relationships with the plan sponsor or create an opportunity for future business relationships. For instance, some experts we interviewed contend that the DeAM division—the proxy voter in this case—was influenced by a business relationship between the DeIB division and their mutual client, HP. SEC records reveal that DeAM reversed its vote and voted in favor of HP’s merger after the investment banking division set up a meeting between the proxy voter and HP management. 
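The deciding-vote arithmetic described above can be sketched as follows. This is a minimal illustration only; the share counts and the 51 percent approval threshold are invented for the example and are not drawn from our analysis:

```python
# Hypothetical illustration of how a pension plan holding just 2 percent
# of a company's voting shares can decide a close merger vote.
# All figures are invented for the example.

total_shares = 100_000_000                # total voting shares outstanding
plan_shares = total_shares * 2 // 100     # 2 percent held by the pension plans
other_votes_for = 49_000_000              # "for" votes from all other shareholders

threshold = total_shares * 51 // 100      # 51 percent needed to approve the merger

# If the plan votes "for," the merger reaches the threshold and passes;
# if it votes "against" (or abstains), the merger fails.
print(other_votes_for + plan_shares >= threshold)  # plan votes "for": True
print(other_votes_for >= threshold)                # plan votes "against": False
```

In this sketch the other shareholders supply 49 percent of the vote, so the plan’s 2 percent stake is exactly the margin between rejection and approval.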
SEC found that, unbeknownst to DeAM’s advisory clients, DeIB was working for HP on the merger and had intervened in DeAM’s proxy process on behalf of HP. This created a material conflict of interest for DeAM, which has a fiduciary duty to act solely in the interests of its advisory clients. The SEC action found that DeAM violated this duty by voting the proxies on the HP stock owned by its advisory clients without first disclosing the conflict. While some experts we interviewed said that they believe most plan fiduciaries take their proxy voting responsibility seriously and vote solely in the interest of participants and beneficiaries, others said that some fiduciaries might prioritize other interests when casting their votes. For example, some experts we interviewed said that the proxy voting decisions of some external asset managers are often influenced by short-term quarterly returns on assets rather than by voting patterns that support long-term goals that benefit shareholders and participants. Some experts we interviewed also said that some external asset managers believe that they are retained and compensated because of superior investment performance and not because of how they vote proxies. Last, some experts said that these managers see only downsides to devoting resources to proxy voting. Experts we interviewed said that the limited disclosure might create inappropriate incentives and result in inadequate accountability, which may make proxy voting especially vulnerable to conflicts of interest. Proxy votes, in some cases, may not be monitored by the plan fiduciary and are not routinely disclosed to the public, two actions that could help ensure that fiduciaries cast votes solely in the interest of pension participants. Limited disclosure and lack of adequate monitoring of proxy voting practices by plans hinder accountability for how votes are cast. 
Consistent with current DOL requirements, votes are disclosed to the appropriate plan fiduciaries. Fiduciaries are not required to publicly disclose proxy voting guidelines and votes, though the plan would be required to make any written proxy voting guidelines available to participants upon request. Hence, only plans have easy access to the information that allows them to monitor how proxy voters are voting. However, not all plans have the resources to devote to such monitoring; therefore, the attention given to the proxy voting responsibility can vary greatly by plan. Some large plans devote a significant amount of expertise and resources to proxy voting, while other plans may not. Furthermore, a few experts said that in many cases where the proxy voting responsibility is delegated externally, the plan provides limited to no review of how the proxies were voted. Experts we interviewed said that limited disclosure might provide incentives for fiduciaries to cast their votes according to their own interests. These experts also said that publicly disclosing proxy votes could help discourage voting that is inconsistent with participants’ interests. For example, a few experts believed that the economic incentives for fiduciaries to vote with management could be significant enough, and the potential penalties for breaching fiduciary duty weak enough, to make voting with management hard to resist. Several experts explained that since breaches of fiduciary duty are very difficult to uncover, limited transparency prevents participants and others from raising questions regarding whether votes were made solely in the interest of participants. They also contend that increased transparency provided by public disclosure may provide participants, regulators, and others with more comprehensive information needed to hold fiduciaries and corporations accountable for their actions. 
In this regard, SEC concluded that shedding light on mutual fund proxy voting could illuminate potential conflicts of interest and discourage voting that is inconsistent with fund shareholders’ best interests. SEC’s new disclosure rules for mutual funds and investment advisers may provide a limited benefit to some pension plan participants, while the new rule for investment advisers may also benefit pension plans whose proxies are voted externally. In 2003, SEC issued a final rule requiring mutual funds to publicly disclose their proxy votes on an annual basis and to adopt and disclose proxy voting policies and procedures to shareholders. This rule may provide some benefit for pension plan participants in defined contribution plans. Specifically, pension plan participants who invest their defined contribution dollars in mutual funds might find the proxy votes cast by the investment managers of their funds on the Web site of the mutual fund provider. On the other hand, defined benefit plan participants may receive little benefit from this rule if defined benefit plans invest few assets in mutual funds. Furthermore, SEC’s new disclosure rule for investment advisers requires investment advisers to inform their clients how they can obtain information on how the clients’ securities were voted. However, this rule may provide little benefit to plan participants in defined contribution and defined benefit plans since it requires disclosure to the plan as the client and not to plan participants. SEC’s new disclosure rule for investment advisers may also provide protections beyond those provided by ERISA for private pension plans whose proxies are voted externally. Specifically, the rule may impose requirements that are either not specifically stated or covered in DOL interpretations of ERISA. 
For example, SEC requires, in part, that investment advisers exercising proxy voting authority over client securities adopt and implement proxy voting policies and procedures for voting clients’ proxies. ERISA, on the other hand, does not require fiduciaries to maintain statements of investment policy, which include statements of proxy voting policy. Also, SEC requires that voting policies and procedures describe how the adviser addresses material conflicts between its interests and those of its clients with respect to proxy voting, while ERISA does not. SEC’s investment adviser rule may provide no benefit to plans that retain voting responsibility because it covers only investment advisers that exercise proxy voting authority over client securities. Certain changes in the retirement savings environment are making the need for enhanced transparency more important. For example, the shift from defined benefit plans to defined contribution plans increases the need for disclosure to plan participants. Because participants bear the investment risk under a defined contribution plan, they, like shareholders, need information to be more active in protecting their retirement assets. SEC reported that its mutual fund disclosure proposal generated significant comment and public interest. Of the approximately 8,000 comment letters, the overwhelming majority supported the proposals and urged SEC to adopt the proposed amendments. Many commenters, including individual investors, fund groups that currently provide proxy-voting information to their shareholders, labor unions, and pension and retirement plan trustees, supported the proposals. Furthermore, one expert said that pension plans should be required to disclose votes and guidelines to participants because participants cannot switch plans the way shareholders can switch their money from one investment company to another. 
This expert further said that having policies such as these in place makes ERISA stronger, especially given that a participant’s retirement assets are tied up in the plan’s portfolio and cannot easily be moved. Additionally, the expert said that the differences between disclosures provided to shareholders and pension plan participants should be eliminated. To manage conflicts, some plan fiduciaries have taken special actions, some of which are similar to SEC requirements for mutual funds. One such action is the maintenance by fiduciaries of detailed proxy voting guidelines that give proxy voters clear direction, reducing ambiguity and vulnerabilities related to conflicts that may influence the voter. Additionally, some fiduciaries include in their guidelines information on what the plan does when a conflict of interest exists on a proxy vote; they also publicly disclose their guidelines. Some plans also disclose a record of all their votes cast to participants and the public. Some pension plans also put additional procedures and structural protections in place to help manage conflicts. To help manage conflicts, some fiduciaries use detailed proxy voting guidelines that they make public. However, such guidelines are not required by ERISA, nor does DOL give guidance to fiduciaries as to the level of detail and specificity that guidelines should contain. Hence, plan guidelines vary widely in their level of detail and specificity, and some provide only minimal guidance. For example, some plan officials we interviewed said that their guidelines instruct proxy voters to always vote in the best economic interest of participants, while other experts said that some guidelines only instruct proxy voters to vote with management but offer no guidance beyond this broad statement. Other plans, on the other hand, create detailed, up-to-date guidelines. 
Some plans that we reviewed, for example, maintain guideline documents that tell proxy voters which way to vote, or which factors to consider in deciding, on a wide range of routine and non-routine proxy issues. The issues include, but are not limited to, board of director elections, auditor selections, executive compensation, reincorporation, capital issues (such as stock issuance), environmental and social concerns, and mergers and acquisitions. In addition, some plans, according to plan officials we spoke with, review their guidelines on a regular basis and update them if needed. This allows the guidelines to reflect new issues in corporate governance. For example, in 2002, one plan updated its guidelines twice to reflect new corporate governance issues arising from the Sarbanes-Oxley Act. Detailed guidelines reduce ambiguity in the proxy voting process by providing direction to help fiduciaries determine how to vote. For example, detailed guidelines may instruct a voter how to analyze an executive compensation vote based on a number of factors, so that the vote is cast in what the fiduciary believes to be the sole interest of participants. As a result, proxy voters have clear direction on how to vote on a specific voting issue. For example, one plan official said that because the plan’s guidelines are clear, there is no confusion about how to vote on any proxy issue. Furthermore, a plan fiduciary or proxy voter may use detailed guidelines to defend against complaints about votes by demonstrating that a given vote was based on the guidelines and was not influenced by a conflict of interest. Some guidelines include what steps a proxy voter should take to prevent a fiduciary breach and ensure that the vote is made solely in the interest of participants when a conflict of interest exists. 
Similar to the recent SEC rule requiring mutual funds and investment advisers to disclose “the procedures that a mutual fund company/complex and investment advisers use when a vote presents a conflict…,” some pension plan fiduciaries include such a discussion in their guidelines. For example, the guidelines of one plan fiduciary we examined indicate that, in the case of a conflict of interest, the issue is to be reported to the president and general counsel of the plan sponsor, who decide how to proceed and ensure that a record of the conflict and the related vote is maintained. In addition, some fiduciaries provide further detail about what constitutes a conflict of interest. For example, one plan’s guidelines define a conflict of interest as “a situation where the Proxy Analyst or Proxy Committee member, if voting the proxy, has knowledge of a situation where either” the plan fiduciary “or one of its affiliates would enjoy a substantial or significant benefit from casting its vote in a particular way.” In addition to developing detailed guidelines, some plan fiduciaries voluntarily make their guidelines, policies, and procedures available to the public, as SEC has required mutual funds to do. Some public pension plans disclose their guidelines on their Web sites, making them available not only to participants and beneficiaries but also to the general public. The officials of some private plans indicated to us that they would probably produce a copy of their guidelines if explicitly requested by a participant, though they admitted that such a request is rarely, if ever, made. SEC addressed the issue of disclosure when, in 2003, it began to require mutual funds to disclose their voting policies and procedures in their registration statements. Mutual fund policies and procedures are required to be available at no charge to shareholders upon request. 
Also, mutual funds must inform shareholders that the policies and procedures and votes are available through SEC’s Web site and, if applicable, on the fund’s Web site. SEC made the case for guideline disclosure by stating that “shareholders have a right to know the policies and procedures that are being used by a fund to vote proxies on their behalf.” Many fund industry members publicly supported SEC’s disclosure rule through comment letters sent to SEC after the rule proposal was released. Officials of one mutual fund company, for example, supported guideline disclosure because the transparency resulting from disclosure would encourage mutual funds to make better proxy voting decisions, which in turn could enhance fund performance. Also, they believed that guideline disclosure would deter casting proxy votes that are not in the best interest of shareholders. Some plan fiduciaries also publicly disclose their proxy votes in an attempt to manage conflicts of interest. We met with officials of some public pension plans that disclose proxy votes on their Web sites, making them available not only to participants and beneficiaries, but also to the public. While some public plans disclose only the votes of a few hundred different equities, other plans disclose all their votes. These plans present a list of companies and how the relevant proxies for each company were voted during a specified timeframe. In addition, one plan sometimes includes a note that briefly explains the rationale for its vote (e.g., why it withheld its vote for a certain director). Two plans whose officials we met with also disclose the number of shares that were voted on each proxy. In April 2003, an SEC rule went into effect requiring mutual funds to disclose, on an annual basis, a record of all proxy votes cast during the previous year. Mutual fund votes are required to be available on the fund’s Web site or provided at no charge to shareholders upon request. 
Also, mutual funds must inform shareholders that the votes are available through SEC’s Web site. SEC, in its rule release on mutual fund proxy vote disclosure, stated that the overall costs of disclosure are reasonable. The experience of the plans we examined that disclose their votes indicates that their costs are not substantial and not a serious burden because proxy voting is done electronically and voting records are required to be maintained. Some experts we interviewed argue that proxy vote disclosure can benefit participants by giving them information on how the plan votes proxies and providing an incentive to the plan fiduciary or proxy voter to vote appropriately. Disclosure would allow plan participants to review votes and raise questions as to whether votes were made appropriately. The knowledge that participants and beneficiaries might complain to the plan and to others if they believe a breach of fiduciary duty has taken place may encourage fiduciaries to vote appropriately to avoid such problems. However, some experts said that participants would be overwhelmed by the information and would not understand what to do with it. In addition, a few experts have said that it is possible that, while participants might not have the time or the knowledge to analyze proxy votes, an investigative journalist might look at the votes of a certain pension plan and publicly discuss any possible breaches they have uncovered or notify the appropriate authorities if any breaches are found or are suspected. Proxy voting disclosure may also influence the voting behavior of fiduciaries, as seen in the example of one large mutual fund. As reported in the news, one large mutual fund voted in favor of the full slate of directors nominated to serve on the board of directors in 29 percent of the proxy contests in which it voted in 2003, while in 2002 the fund had voted in favor of the full slate in 90 percent of the contests. 
And while the fund had voted for 100 percent of auditor approvals in 2002, in 2003 it had voted for only 79 percent. Experts we interviewed said that SEC’s disclosure rules might have contributed to that change in behavior. Nine of 12 respondents to our written interview support proxy vote disclosure by pension plan fiduciaries, and many experts we spoke with also support proxy vote disclosure by plans. Very few respondents and experts we interviewed believed that disclosure of votes would not benefit pension plan participants. Specifically, those few cited the following reasons: (1) the costs of disclosure outweigh any benefits to participants; (2) there is the potential for politicizing proxy voting; (3) disclosure may serve as a detriment to the investment manager’s investment strategy; and (4) participants lack interest in proxy voting. Some plan fiduciaries have voluntarily taken additional steps to help manage conflicts of interest that may lead to breaches of fiduciary duty, including implementing structural protections and special proxy voting procedures. For example, a few plans we reviewed structure their organization to separate those who cast votes from executives who make policy decisions about the plan. Some plans delegate the responsibility for proxy voting in a way that protects against fiduciary breaches. One public plan, for example, had external asset managers cast proxy votes, but decided to bring the proxy voting process in-house to avoid having the plan’s proxies voted on both sides of an issue. By doing all voting internally, plan fiduciaries can provide better safeguards ensuring that votes are cast solely in the interest of participants and can ensure consistency in how votes are cast. In order to address concerns about conflicts of interest related to employer stock in pension plans, a few pension plan officials we interviewed said that their company stock is managed and proxies are voted by an independent fiduciary outside of the company. 
In other cases, some fiduciaries use independent proxy-voting firms for research and analysis or to cast proxy votes on their behalf. For example, officials from one plan that we met with told us that they use an outside proxy-voting firm to make the vote decision when a conflict exists. One asset manager, for example, did so during a contentious merger in which its Chief Executive Officer was a director of the acquiring company. Some fiduciaries we met with have an outside proxy voter execute proxy votes based on their plan’s own guidelines. Other fiduciaries simply use outside proxy-voting firms to provide analysis and research, which the fiduciary may then use to help determine how to vote. Outside proxy-voting firms are not without their own conflicts of interest, however. Some proxy-voting firms have expanded into other services. One firm, for example, helps corporations design proxies to improve the chances that proxy issues will succeed. A conflict of interest would exist when the proxy-voting firm has to vote on a proxy that it helped create or when it must vote a proxy for the same company from which it received revenue for some other service. In addition to the structural protections some fiduciaries have put into place, some fiduciaries have implemented special procedures that are used when a conflict exists. For example, according to officials at one company we interviewed, if a proxy vote is to be cast not in accordance with the plan’s guidelines, then the vote is decided by the plan’s proxy committee, which is also required to note why the vote was inconsistent with plan guidelines. At other plans we reviewed, in the event that an attempt is made to influence a proxy vote, the plan’s executive committee makes the vote decision. 
Additionally, officials from one private plan said that when a material conflict of interest exists, an independent third-party proxy voter is given the responsibility to determine how to vote, based on the plan’s guidelines. Furthermore, this plan has a “Material Conflict of Interest Form,” which is filled out and signed by the voting analyst and a member of the plan sponsor’s proxy committee. This form includes information on the stock being voted, the issue being voted on, what the plan’s proxy voting guidelines indicate about that issue, details on the conflict of interest, and certification from the third-party proxy voter on how the vote was cast. In addition, at another plan, when a material conflict of interest exists during a proxy vote, the vote is reported to the president and general counsel of the plan sponsor. They decide how to address the situation, such as getting an outside vote recommendation or disclosing the existence of the conflict. A record of meeting notes and issues surrounding conflicts is maintained by the plan in case any questions arise. The Department of Labor’s enforcement of proxy voting requirements has been limited for several reasons. First, participant complaints about voting conflicts are infrequent, at least in part, because votes cast by a fiduciary or proxy voter generally are not disclosed; therefore, participants and others are not likely to raise questions regarding whether a vote was cast solely in their interest. In addition, ERISA presents legal challenges for the department in bringing proxy voting cases. Specifically, because of the subjective nature of fiduciary votes, it is difficult to obtain evidence that would prove the plan fiduciary was influenced by something other than the interests of participants. 
Furthermore, even if such evidence could be obtained, monetary damages are difficult to value and, because the department has no statutory authority to impose a penalty without assessing damages, fiduciary penalties are difficult to impose. In part because of these challenges, and in part because of its limited resources, DOL’s reviews of proxy voting in recent years have been limited. As a result, some experts we interviewed do not view the department as a strong enforcement agent. Challenges exist in the proxy voting system that limit DOL’s ability to identify breaches and to prove that a fiduciary was influenced to act contrary to the interests of plan participants. In March 2002, we reported that DOL enforces ERISA primarily through targeted investigations. DOL determines what issues it will investigate using a multifaceted enforcement strategy, which ranges from responding to concerns raised by participants and others to developing large-scale projects involving a specific industry, plan type, or type of violation. DOL also uses the Annual Returns/Reports of Employee Benefit Plans (Form 5500 Returns) to identify potential issues for investigation. In addition, its regional outreach activities, while aimed primarily at educating both plan participants and sponsors, are used to gain participants’ help in identifying potential violations. Although DOL’s strategy includes a number of ways to target investigations, DOL officials consider information provided by plan participants and beneficiaries an integral starting point for developing many of its investigations. For instance, through information provided in summary annual reports (SARs), summary plan descriptions (SPDs), individual benefit statements, and other related reports, participants have access to financial and operational information regarding their pension plan and their accrued benefits. 
The information provided in these reports can help participants and beneficiaries monitor their plans and identify warning signs that might alert them to a possible problem warranting DOL’s attention. While participant complaints might be useful in targeting some DOL investigations, relying on participant complaints may not currently be the most effective way to identify potential proxy voting cases. Because of the current limited level of disclosure, DOL receives few complaints related to proxy voting. For instance, as previously mentioned, the SARs and other related reports provide plan financial and operational information; however, they do not contain proxy voting information such as voting guidelines and a record of how votes were cast. In addition, DOL officials told us that proxy votes and guidelines are disclosed to the plan and that guidelines must be made available to participants and beneficiaries when requested. However, one expert explained that participants generally do not know to ask for this information. As such, they are not likely to raise questions about whether or not a vote was cast solely in their interest. Likewise, because proxy votes are not publicly disclosed, complaints to DOL from those other than plan participants and beneficiaries are less likely to occur. In addition to the difficulty of identifying potential breaches in the proxy voting system, DOL also faces the challenge of proving under ERISA that a fiduciary was influenced to act contrary to the interests of plan participants. Because a plan fiduciary’s vote requires judgment, determining what influenced his or her vote can be difficult. If a plan fiduciary can provide his or her rationale for voting a certain way—showing that, in his or her judgment, proxies were voted solely in the interest of plan participants—it is very difficult for DOL or others to prove otherwise. 
Proving a fiduciary breach requires evidence that the plan fiduciary was influenced in the voting by something other than the interests of plan participants. Certain information—such as existing conflicts of interest between the plan fiduciary and some other influential party, the plan fiduciary’s own self-interest, or the potential impact of certain votes—is important when trying to establish that such influence was acted upon. Absent this or similar information, leaks by informed parties—whistleblowers—are likely to be the only way one might prove a breach actually occurred. Another challenge that DOL faces is that monetary damages are difficult to value and, therefore, penalties and other sanctions are difficult to impose. According to DOL, it is difficult to link a single proxy vote to damages to the plan participants. This is often the case because there are many economic variables that have an impact on share value. That is, underlying economic factors such as fiscal policy, monetary policy, unemployment, the threat of inflation, the global economy, and currency valuations are all major determinants of share value. Therefore, it is difficult to isolate the effect a single proxy vote may have had. Also, because of the potential for a vote to have a long-term rather than a short-term effect on share value, damages may not be immediately evident. In addition, while the research community and others have differing opinions about whether proxy votes have economic value, where it is believed that these votes do have a value, the determination of this value can be complicated. For example, in response to our written interview, most experts who responded to this question indicated that valuing proxy votes is a complex task, its difficulty dependent upon variables such as the issue being voted on and an entity’s governance structure. 
One respondent said that a case could possibly be made if a decline in the value of a company could be tied to the specific point in time when the plan fiduciary voted for a self-serving measure. However, the fiduciary’s vote would have to be significant enough to affect the outcome of the proxy contest. Using the Hewlett-Packard situation as an example, the respondent added that one cannot know what the value of Hewlett-Packard shares would have been if the merger had not gone through and thus one cannot calculate the difference between that value and the current value of the merged Hewlett-Packard/Compaq shares. Additionally, others commented that, in the end, DeAM’s vote might not have affected the outcome of the proxy contest. With respect to penalties, unlike SEC, which has the authority to impose a penalty without first assessing and then securing monetary damages, DOL does not have such statutory authority and, as such, must assess penalties based on damages or, more specifically, the restoration of plan assets. Under Section 502(l), ERISA provides for a mandatory penalty (1) against a fiduciary who breaches a fiduciary duty under, or commits a violation of, Part 4 of Title I of ERISA or (2) against any other person who knowingly participates in such a breach or violation. This penalty is equal to 20 percent of the “applicable recovery amount”—that is, any settlement amount agreed upon by the Secretary or any amount ordered by a court to be paid in a judicial proceeding instituted by the Secretary. However, the applicable recovery amount cannot be determined if damages have not been valued. As we reported in 1994, this penalty can be assessed only against fiduciaries or knowing participants in a breach who, by court order or settlement agreement, restore plan assets. Therefore, if (1) there is no settlement agreement or court order or (2) someone other than a fiduciary or knowing participant returns plan assets, the penalty may not be assessed. 
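The dependence of the section 502(l) penalty on a valued recovery amount can be sketched as follows. This is a minimal illustration only; the function name and dollar figures are invented, and the sketch simply reflects the point above that no penalty can be computed when damages have not been valued:

```python
# Sketch of the ERISA section 502(l) computation: the mandatory penalty
# equals 20 percent of the "applicable recovery amount" (a settlement or
# court-ordered payment). Function name and figures are hypothetical.

def section_502l_penalty(applicable_recovery_amount):
    """Return the 20 percent penalty, or None if damages were never valued."""
    if applicable_recovery_amount is None:
        # No settlement or court order valuing damages means there is no
        # applicable recovery amount, so no penalty can be assessed.
        return None
    return 0.20 * applicable_recovery_amount

print(section_502l_penalty(1_000_000))  # $1 million recovery -> $200,000 penalty
print(section_502l_penalty(None))       # damages never valued -> no penalty
```

The sketch mirrors DOL’s constraint: unlike SEC, the department cannot impose a penalty until a recovery amount exists to take 20 percent of.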
Because DOL has never found a violation that resulted in monetary damages, it has never assessed a penalty or removed a fiduciary as a result of a proxy voting investigation. Because of these challenges, DOL has devoted few resources to proxy voting over the last several years. Between 1988 and 1996, DOL conducted three enforcement studies to determine the level of compliance with proxy voting requirements among selected fiduciaries (see table 1). The first of these projects was initiated in May 1988, when the department examined the management of plan votes by a broad range of investment managers, with a particular focus on certain contested issues considered at annual shareholders' meetings that year. Then in 1991, DOL started its second project, to determine how banks were fulfilling their responsibilities with respect to proxy voting practices. DOL looked at proxy voting procedures at 75 banks, covering the application of those procedures during the 1989 or 1990 proxy season. Finally, in its last project, the department once again reviewed the practices of investment managers—12 in total—along with 44 pension plans, with respect to corporate governance issues. It reviewed certain proxy votes at five annual shareholders' meetings held in 1994 as well as general proxy voting policies and practices. According to DOL, overall the enforcement studies found improvements in proxy voting practices, as virtually all plans and investment managers in the studies voted their proxies. The enforcement studies also found that additional improvement was needed in the plans' monitoring of investment managers to ensure that proxies are voted in accordance with stated policies. Furthermore, they found that although investment managers appear to have the records to enable clients to review managers' decisions on proxy voting, few plan clients actually review the reports that are automatically provided to them. 
In situations in which reports are available upon request, few plans request a copy. Given these findings, the department has not conducted similar reviews in recent years to determine current levels of compliance. DOL officials told us that they believe proxy voters are generally in compliance, that they receive few complaints in this area, and that they focus most of their limited resources on other priority areas, where investigations are more likely to identify violations that can be corrected. DOL officials said that they typically do not conduct investigations focused specifically on proxy voting and that they allocate few resources to this issue; instead, they direct their limited resources according to the agency's Strategic Enforcement Plan. However, proxy voting practices may be examined during investigations of investment managers. DOL said that its investment management investigative guide includes steps for reviewing proxy voting, but investigators have discretion over whether to review proxy voting practices. According to DOL officials, investigators receive training on the general fiduciary obligations of named fiduciaries and investment managers with respect to the voting of proxies on plan-owned stock. When asked how often these reviews included an examination of proxy voting, DOL officials responded that this information is not tracked. Some plan fiduciaries and industry experts whom we interviewed indicated that DOL lacks visibility as an enforcement agent in this area. For example, some experts said that DOL's examination of proxy voting practices does not seem to occur routinely and that it is not clear what enforcement action DOL has taken in recent years related to proxy voting. Additionally, others described an environment that provides little incentive to do what is best for participants, indicating that fiduciaries have no expectation that DOL will take action should they breach their proxy voting responsibilities. 
One DOL official said that the department has made its position on proxy voting known and has issued clear guidance on what is required of fiduciaries. Also, given its limited statutory authority and resources, the department follows a strategic enforcement plan and, based on that plan, places its limited resources in areas where it can identify violations that can be corrected. The retirement security of pension plan participants depends on decisions made each day in the marketplace by pension plan fiduciaries. DOL guidance requires fiduciaries to cast proxy votes solely in the interest of plan participants and beneficiaries. While ERISA requires that voting guidelines be made available to participants upon request, it does not require disclosure of proxy votes to participants and the public. Nor does ERISA require, as current SEC regulations do for mutual fund investment companies and investment advisers, that plans include in their guidelines language regarding what actions fiduciaries will take to respond to conflicts of interest. Increased transparency of both proxy guidelines and votes could provide participants and others with information needed to monitor actions that affect retirement assets. However, some plan fiduciaries have taken actions to manage conflicts of interest, including maintaining proxy voting guidelines and disclosing votes. Likewise, a few plan sponsors have hired independent fiduciaries to manage company stock in their pension plans. DOL's role in enforcing ERISA's fiduciary provisions, including proxy voting requirements, is essential to ensuring that plan fiduciaries are voting solely in the interest of plan participants and beneficiaries. Yet DOL has faced a number of enforcement challenges, including legal requirements restricting its ability to assess penalties under ERISA. 
Furthermore, DOL officials said that the agency does not have the statutory authority to require plan fiduciaries to periodically and publicly disclose proxy votes and guidelines. SEC, because of its role in protecting all investors, including those in participant-directed retirement savings plans, has taken steps to increase transparency in the mutual fund industry. DOL's inability to take similar steps with respect to pension plan fiduciaries may provide inappropriate incentives for fiduciaries not to act solely in the interest of plan participants when voting proxies. Furthermore, given both DOL's and SEC's goals to protect plan participants as investors, coordination of their efforts to achieve this goal is important. If the Congress wishes to better protect the interest of plan participants and increase the transparency of proxy voting practices by plan fiduciaries, it should amend ERISA to require that plan fiduciaries develop and maintain written proxy-voting guidelines; include language in voting guidelines on what actions the fiduciaries will take in the event of a conflict of interest; and, given SEC's proxy vote disclosure requirements for mutual funds, annually disclose votes as well as voting guidelines to plan participants, beneficiaries, and possibly also to the public. From a practical perspective, this disclosure could apply to all votes, but at a minimum, it should include those votes that may affect the value of the shares in the plan's portfolio. Such disclosures could be made electronically on the applicable Web site. Since many plans often use multiple fiduciaries for voting proxies, the plan could also provide participants and others directions on how voting records by the various fiduciaries could be obtained. We believe that Congress should ensure that participants have the right to request proxy voting records at least annually, consistent with their current right to request other plan documents. 
Congress should also consider amending ERISA to give the Secretary of Labor the authority to assess monetary penalties against fiduciaries for failure to comply with applicable requirements. Finally, Congress should consider amending ERISA to require that, at a minimum, an independent fiduciary be used when the fiduciary is required to cast a proxy vote on contested issues or make tender offer decisions in connection with company stock held in the company’s own pension plan. In our view, this independent fiduciary requirement would not affect votes by a participant in an eligible individual account plan. To improve oversight and enforcement of proxy voting, we recommend that the Secretary of Labor direct the Assistant Secretary of EBSA to increase the Department’s visibility in this area by conducting another enforcement study and/or taking other appropriate action to more regularly assess the level of compliance by plan fiduciaries and external asset managers with proxy voting requirements. Such action should include examining votes, supporting analysis, and guidelines to determine whether fiduciaries are voting solely in the interest of participants and beneficiaries, and enhancing coordination of enforcement strategies in this area with SEC. We provided a draft of this report to DOL and SEC for their review and comment. DOL's comments are included in appendix II; SEC did not provide written comments. Both agencies provided technical comments, which we have incorporated as appropriate. In its response to our draft report, DOL generally disagreed with our matters for congressional consideration and recommendations, saying that conflicts of interest affecting pension plans are not unique to proxy voting and that requiring independent fiduciaries and increased disclosures would increase costs and discourage plan formation. 
DOL also said that the enforcement studies of proxy voting practices undertaken previously by the department provide an adequate measure of compliance in this area and, therefore, to undertake new such studies, with an expectation of finding no significant level of noncompliance, would be an inappropriate use of resources. Our recommendations and matters for congressional consideration are predicated on two principles: additional transparency and enhanced enforcement presence. We believe that disclosing pension plans' proxy voting guidelines and votes makes it more likely that votes will be cast solely in the interest of plan participants, and that a visible enforcement presence by DOL helps to reinforce the public interest in this result. So although we agree with certain of DOL's points, we cannot agree that additional transparency and an enhanced enforcement presence would not be beneficial. Furthermore, because DOL believes that it does not have the authority to require proxy voting guidelines and disclosure of votes, and, in our view, it is important to shed more light on events such as proxy voting—particularly contested proxy votes—we believe Congress should consider amending ERISA to include such requirements. We acknowledge that plan fiduciaries face conflicts beyond proxy voting and that conflicts associated with casting a proxy vote may be no greater than the potential for conflicts in making other fiduciary decisions. However, our work and, therefore, our recommendations are focused on issues related to proxy voting. Furthermore, we found that DOL's enforcement in this area has been limited, which may not be the case in its oversight of other fiduciary actions. For example, tender offer decisions made by fiduciaries may suffer from similar conflicts. DOL, however, has been able to develop investigative cases and secure positive results for plan participants and beneficiaries in connection with this area. 
However, DOL has not been similarly successful in developing proxy voting cases. Given that plan participants may be particularly vulnerable when internal fiduciaries vote employer stock held in the plan sponsor's own pension plan, we believe it is an appropriate safeguard to require that an independent fiduciary be appointed to vote these proxies. We are recommending independent fiduciaries for certain circumstances. Furthermore, in our view, this independent fiduciary requirement would not affect votes by a participant in an eligible individual account plan. In disagreeing with our recommendation that Congress consider amending ERISA to require that an independent fiduciary be used to vote proxies for employer stock held in a plan sponsor's own pension plan, DOL said that the Congress already considered, but did not include, an independence requirement for plan fiduciaries when it passed ERISA in 1974. We acknowledge that Congress did not require independent fiduciaries when it originally enacted ERISA. However, the conflicts of interest associated with plan holdings of company stock have received increased public attention in the last several years, and we believe the Congress should reconsider ERISA's current legal requirements in connection with company stock. In response to our recommendation that DOL conduct another enforcement study to determine the level of compliance with proxy voting requirements, DOL said that it has seen no evidence of a negative change in the level of compliance and that another proxy enforcement study would absorb a considerable amount of resources. Rather than conducting another proxy enforcement study, DOL said that it would evaluate proxy voting information during its investigations in the financial services area. As we discuss in our report, limited statutory authority and other challenges are obstacles to effective DOL enforcement in this area. 
Furthermore, we understand that DOL must balance efforts in this area with other enforcement priorities. The statutory changes we have suggested, if enacted, may help DOL’s enforcement efforts in the future. Nonetheless, even with such changes, we believe that conducting reviews of proxy voting issues on a periodic basis is important to ensure compliance and increase DOL’s presence and visibility in this area. We acknowledge that conducting another enforcement study is just one of various options available to DOL to accomplish these goals and have altered our recommendation to be explicit on this point. However, in our view, any review in this area should go beyond simply determining whether fiduciaries cast proxy votes, and should include assessing whether plans are monitoring proxy voting practices by external investment managers and evaluating whether fiduciaries voted solely in the interest of plan participants and beneficiaries. Regarding our matter for congressional consideration that plan fiduciaries be required to disclose proxy voting guidelines and votes, at a minimum, to plan participants, DOL noted that appropriate plan fiduciaries are required to monitor proxy voting information and that proxy voting guidelines are available to participants upon request. DOL further said that requiring disclosure to the general public or even to all participants would significantly increase costs to plans. Recognizing that ERISA’s disclosure requirements are focused on plan participants and beneficiaries, not the general public, we modified our matter for congressional consideration to state that proxy guidelines and votes should at a minimum be disclosed to participants and beneficiaries. Our report addressed concerns about the potential costs of disclosing proxy voting guidelines and votes by suggesting that such information could be made available electronically. 
Unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the Secretary of Labor, the Chairman of the Securities and Exchange Commission, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-7215 or George Scott at (202) 512-5932. See appendix III for other contributors to this report. To determine what conflicts exist in the proxy voting system and the extent to which fiduciary breaches occur as a result of these conflicts, we interviewed officials at the Department of Labor's (DOL) Employee Benefits Security Administration (EBSA) and at the Securities and Exchange Commission (SEC). Using a standard set of questions, we interviewed various proxy voting experts, including academics, economists, Employee Retirement Income Security Act (ERISA) attorneys, industry experts, pension plan sponsors, asset managers, proxy voting firm representatives, proxy soliciting companies, and plan practitioners. These experts were selected, in part, from news articles involving abuses in the mutual fund industry, news reports regarding corporate scandals such as Enron, reports of highly contested proxy contests, historical articles dating back to the proxy scandals of the 1980s and 1990s, and recent news reports and material on SEC's Web site pertaining to SEC's proxy voting disclosure proposals. Experts were also selected based on published research on proxy voting, discussions with plan sponsors and industry experts, congressional testimony, and Congressional Research Service reports. 
To determine what safeguards fiduciaries have put in place to protect against breaches, we interviewed a number of public and private pension plan sponsors, asset managers, proxy voting firm representatives, and other experts. These public and private pension plans were selected for their promising practices, identified through discussions with industry experts and through pension industry publications and other published reports on the corporate governance practices of these plans. To explore the practices of internally managed plans, we interviewed various proxy voting experts and officials of the plans listed in Pensions & Investments as having internally managed assets. To determine DOL's enforcement of proxy voting requirements, we interviewed officials at EBSA and reviewed DOL enforcement material and previously issued GAO reports on DOL's enforcement program. To determine the extent to which private pension plans invested in their own employer securities, we obtained the total value of the employer stock in each company's pension and welfare benefit plans. To do so, we analyzed plan financial information filed annually (Form 5500) with the Internal Revenue Service and EBSA. The Form 5500 report must be submitted annually by the administrator or sponsor of any employee benefit plan subject to ERISA, as well as by certain employers maintaining a fringe benefit plan. The report contains various schedules with information on the financial condition and operation of the plan. The total value of employer shares is reported on either Schedule H or Schedule I, depending on the number of participants covered by the plan. EBSA provided us with a copy of the 2001 electronic Form 5500 database for our analysis. We assessed the reliability of these data for our purposes by evaluating the electronic records selected for analysis for outliers, duplicate records, and otherwise inappropriate values. 
Form 5500 records that did not meet our review standards were eliminated from our analysis. We focused our analysis of the Form 5500 data on corporations listed in the Fortune 500. To do so, we matched each Fortune 500 company to its pension plans on the basis of Employer Identification Numbers (EINs). We used several methods to identify the EINs associated with each corporation. We started with a list of EINs for Fortune 500 companies that was purchased from Compustat (a database from Standard & Poor's). To identify the EINs for the remaining companies, we searched the 10-K annual filing statement for each relevant company. We then identified those companies whose Form 5500s reported that they held their own employer securities at the plan's year-end date. This resulted in a database for filing year 2001 containing the information of 490 Form 5500 returns filed by 272 of the Fortune 500 companies. To analyze the total voting power of those 272 Fortune 500 companies for plan year 2001, we obtained the proxy statements (Form DEF 14A) filed with SEC for those companies. DEF 14A statements are the final annual proxy statements sent to all shareholders of a corporation that detail all the issues to be voted on. The statements also list the number of shares entitled to vote on the proxy issues and, where applicable, the number of votes per share (e.g., some companies issue different classes of preferred stock that entitle the owner to more than one vote per share). For each company, we multiplied the number of shares outstanding for each class of share by the number of votes entitled to that class and summed those figures across all classes of shares to obtain the total number of shareholder votes. We used data from the DEF 14A statements filed soonest after the end of calendar year 2001, typically in the spring of 2002. 
We also obtained share price data from the New York Stock Exchange's (NYSE) Trade and Quote (TAQ) database. We used that database to obtain the closing price (the price of the last transaction of the day) on the day indicated as the plan year-end date on the Form 5500 for each company. The TAQ database contains a listing of intraday transactions (including shares involved and the price) for all companies listed on the NYSE, the National Association of Securities Dealers Automated Quotations (NASDAQ) stock market, and the American Stock Exchange (AMEX). To ensure the reliability of the TAQ price data, GAO economists previously conducted a random crosscheck of the TAQ data against data provided by NASDAQ, Yahoo! Finance, and other publicly available stock data sources. From the Form 5500 data, we obtained the total year-end value of company stock holdings by corporations in their pension and welfare benefit plans. From the TAQ database, we obtained the closing price of the stock on the plan year-end date. We then divided the total year-end value by the closing price of the stock to obtain the number of voting shares held in the company's pension and welfare benefit plans. We then divided the number of votes controlled by the pension plan by the total votes outstanding (i.e., the total number of votes based on available classes of stock for each of our Fortune 500 companies) to obtain the voting power, or the percentage of votes controlled by the company's pension and welfare benefit plans. Other major contributors include Gwendolyn Adelekun, Matthew Rosenberg, Gene Kuehneman, Lawrance Evans, Alison Bonebrake, Derald Seid, Corinna Nicolaou, Michael Maslowski, Roger J. Thomas, Richard Burkard, and Kenneth J. Bombara.
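The voting-power arithmetic described in the methodology above can be sketched as follows. This is a minimal Python illustration under stated assumptions: the function names and all dollar and share figures are hypothetical, not drawn from our Form 5500 or TAQ data, and the sketch omits the record-cleaning and EIN-matching steps.

```python
def total_votes_outstanding(share_classes):
    """Sum over share classes from the DEF 14A proxy statement:
    shares outstanding in each class times votes per share."""
    return sum(shares * votes_per_share for shares, votes_per_share in share_classes)

def plan_voting_power(plan_holdings_value, closing_price, share_classes):
    """Voting power: shares held by the company's pension and welfare
    benefit plans (Form 5500 year-end value divided by the TAQ closing
    price) as a percentage of total shareholder votes outstanding."""
    shares_held = plan_holdings_value / closing_price
    return 100.0 * shares_held / total_votes_outstanding(share_classes)

# Hypothetical company: one class of 100 million shares with one vote each;
# plans hold $150 million of employer stock priced at $30 at plan year-end.
power = plan_voting_power(150_000_000, 30.0, [(100_000_000, 1)])
print(round(power, 2))  # 5.0
```

The `share_classes` list accommodates companies that issue classes with more than one vote per share, mirroring the per-class multiplication and summation described above.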
In 1998, about 100 million Americans were covered by private pension plans with assets totaling about $4 trillion. The retirement security of plan participants can be affected by how certain issues are voted on during company stockholders' meetings. Fiduciaries, who are responsible for voting on such issues on behalf of some plan participants (proxy voting), are to act solely in the interest of participants. Recent corporate scandals reveal that fiduciaries can be faced with conflicts of interest that could lead them to breach this duty. Because of the potential adverse effects such a breach may have on retirement plan assets, we were asked to describe (1) conflicts of interest in the proxy voting system, (2) actions taken to manage them, and (3) DOL's enforcement of proxy voting requirements. Conflicts of interest in proxy voting can occur because various business relationships exist that can influence a fiduciary's vote. When a portion of a company's pension plan assets is invested in its own company stock, the internal proxy voter may be particularly vulnerable to conflicts of interest because management has an enhanced ability to directly influence the voter's decisions. Although situations representing conflicts will occur, limited disclosure of proxy voting guidelines and votes may make proxy voting more vulnerable to such conflicts. Because of limited transparency, concerned parties do not have the information needed to raise questions regarding whether proxy votes were cast solely in the interest of plan participants and beneficiaries. Some plan fiduciaries and the Securities and Exchange Commission (SEC) have taken steps to help manage conflicts of interest in proxy voting. Specifically, some plans voluntarily maintain detailed proxy voting guidelines that give proxy voters clear direction on how to vote on certain issues. 
The SEC has imposed new proxy voting regulations on mutual funds and investment advisers, requiring that specific language be included in a fund's guidelines on how fiduciaries will handle conflicts of interest. Some plan fiduciaries voluntarily make their guidelines available to participants and the public. In addition, some plans voluntarily disclose some or all of their proxy votes to participants and the public. Some plans also voluntarily put additional procedures in place to protect proxy voters from conflicts of interest in order to avoid breaches of fiduciary duty. For example, some plan sponsors hire independent fiduciaries to manage employer stock in their pension plans and vote the proxies associated with that stock. Plans may also hire proxy-voting firms to cast proxies to ensure that votes are made solely in the interest of participants and beneficiaries. DOL's enforcement of proxy voting requirements has been limited for several reasons. First, participant complaints about voting conflicts are infrequent, at least in part, because votes cast by a plan fiduciary or proxy voter generally are not disclosed; therefore, participants and others are not likely to have the information they need to raise questions regarding whether a vote has been cast solely in their interest. Second, the Employee Retirement Income Security Act of 1974 presents DOL with legal challenges in bringing cases: it is often difficult to obtain evidence that the fiduciary was influenced in his or her voting by something other than the sole interests of plan participants. Finally, even if such evidence existed, monetary damages are difficult to value and fines are difficult to impose, and DOL has no statutory authority to impose a penalty without first assessing damages and securing a monetary recovery. In part because of these challenges, DOL has devoted few resources to enforcing proxy voting requirements.
The Crime Awareness and Campus Security Act of 1990 and its implementing regulations require colleges, as a condition for participating in federal financial aid programs authorized under title IV of the Higher Education Act of 1965, as amended, to publish and distribute an annual security report that includes statements about campus law enforcement policies, security education and crime prevention programs, alcohol and drug policies, sexual assault education and prevention programs, procedures for reporting sexual assaults, procedures explaining how reports of sexual assaults will be dealt with, and annual statistics on crime incidents. The law also requires colleges to provide timely warning to the campus community about crimes that are considered to represent a threat to other students and employees. The law requires that data on campus crime be collected separately from state or local data and that information on the incidence of campus crime and on colleges' security policies and procedures be made available. The statistical reporting provision requires colleges to annually compile and report to the campus community statistics on reported crimes, such as murder and robbery, and on arrests for such crimes as liquor law violations. As the agency administering title IV programs, the Department of Education is responsible for issuing guidance to implement the law, monitoring colleges' compliance with its requirements, and issuing two reports: a compilation of exemplary campus security practices and a report to the Congress on campus crime statistics. Procedures for monitoring compliance with title IV requirements include program reviews of selected colleges, annual independent audits of all colleges participating in title IV, and compliance reviews in response to complaints received. According to a 1996 publication of the Student Press Law Center, 11 states have laws requiring schools to compile and release statistics on campus crime. Two bills—H.R. 2416 and S. 
2065—introduced in the 104th Congress would have required more detailed and current campus security records to be made accessible to the public. Although a hearing was held in the House, no further action was taken before the session's end. Had the bills been enacted, they would have applied to colleges with police or security departments and required the colleges, in addition to reporting annual crime statistics, to maintain open-to-the-public, easily understood daily logs that chronologically recorded all crimes against persons or property reported to campus police or security departments. The bills were modeled after a law that has been in effect in Tennessee since 1994. Department implementation of the Crime Awareness and Campus Security Act's reporting requirements has included issuing regulations; disseminating policy guidance to colleges; providing technical assistance to colleges and outreach to campus law enforcement organizations; and, to a limited extent, checking whether colleges have prepared crime statistics reports and what procedures they have used for disseminating the reports. However, because of resource constraints, the Department has only recently expanded its monitoring efforts by initiating program reviews that specifically address compliance with the act's reporting requirements. Moreover, the Department was late in issuing a required report to the Congress. Following enactment of the law in 1990, the Department issued various policy guidance documents on campus security to help colleges meet the law's requirements, as summarized in table 1. Most of the guidance was issued as Department letters. Final implementing regulations took effect in July 1994. The Department supplemented its policy guidance with technical assistance provided upon request by its Customer Support Branch. To help colleges achieve compliance, the Department emphasizes providing such assistance rather than imposing sanctions. 
Under Department policy, the Secretary imposes sanctions only if a college flagrantly or intentionally violates the regulations or fails to take corrective action when required to do so. Available sanctions include fines or limitation, suspension, or termination of participation in federal financial aid programs. Department officials told us that although the Department and independent auditors had identified violations at 63 colleges since the law’s enactment, as of January 1997, the Department had not imposed sanctions against any college found in noncompliance with campus security requirements. Although the Department began issuing guidance to colleges on complying with the law in 1991, guidance for monitoring program compliance came much more slowly. The Department did not issue its first program review guidance specifically addressing campus security until September 1996. Until this recent incorporation of campus security in program review guidance, the Department’s program reviewers had not emphasized monitoring campus security reports in their title IV reviews, focusing instead on compliance with other provisions of title IV. Although most of the nearly 2,800 title IV program reviews conducted between September 1992 and May 1996 found noncompliance with some title IV program requirements, only 24 of these reviews identified campus security violations. Department officials told us that monitoring had generally been limited to checking whether colleges published a campus security report and had procedures for its distribution. Since no review guidance for monitoring campus security was available until September 1996, it is unlikely that the reviewers checked whether the reports contained all the required information or whether information was accurate. Under the new monitoring guidance, program reviewers must check a college’s crime report for all required information and should attempt to evaluate the procedures used to collect crime data. 
The accuracy of crime statistics need not be verified unless it becomes apparent from a complaint or some other source that the security report may be incomplete or inaccurate. In such cases, the Department is to take appropriate action to ensure compliance, including more thoroughly examining the statistics and, if warranted, taking formal administrative action. As of January 1997, the Department had received five complaints of noncompliance: one precipitated an in-depth campus security compliance review; the other four complaints are still being investigated. Even with the new guidance, however, program review officials told us that staff are still having some difficulty monitoring compliance. Reasons for the difficulty include reviewers’ limited experience in dealing with law enforcement matters, uncertainties about how to interpret certain definitions of reportable crimes, and differences among campuses that make evaluation difficult under a single set of program review guidelines. In the case of urban campuses, for example, reviewers may have difficulty in determining which facilities are campus related. The difficulties involving definitions and differences among colleges are discussed in more detail later in this report. The Department has yet to issue guidance for independent auditors who conduct federally required annual audits of all colleges participating in title IV programs. The Department’s June 1995 independent audit guide does not provide guidance to auditors on checking for campus security compliance. As of August 1996, only six audits had documented noncompliance on security matters since the act took effect, and a Department official said that most auditors participating in training sessions held in regional Inspector General offices were unaware of campus security reporting requirements, further suggesting that auditors may not be routinely scrutinizing campus security reports. 
The Department plans to issue an updated audit guide that will explicitly refer to campus security compliance and instruct auditors to ensure that campus security reports are prepared and distributed according to federal requirements. A Department official responsible for writing the audit guide expects it to be issued some time in 1997. Although the Department issued a required report on exemplary campus security practices in September 1994, the Department was more than 1 year late in issuing a report on campus crime statistics to the Congress. The law required the Department to review campus crime statistics and issue a report to the Congress by September 1, 1995. Citing limited resources to perform such a review, the Department postponed issuing the report until February 1997. As the basis for the report, the Department conducted a national survey on campus crime and security. A representative sample of 1,500 colleges was surveyed to establish baseline information on crime statistics by such attributes as type of school (such as 4-year public or 2-year private), nature of the campus (such as urban or rural and residential or commuter), and types of public safety employees providing campus security. Having compiled and reported the survey results, the Department plans to evaluate whether additional actions are needed at the federal level. Our review of selected colleges’ campus security reports and our interviews with selected campus officials indicate that colleges are having difficulty applying some of the law’s reporting requirements. As a result, colleges are not reporting data uniformly. Of the 25 reports we reviewed, only 2 provided information in all the prescribed categories. Table 2 summarizes the principal problems colleges are having. 
Campus law enforcement officials differ as to whether their reported statistics must include crimes reported to them by other campus authorities without information identifying the persons involved in the reported incidents. For example, according to comments the Department received during rulemaking, students are sometimes more comfortable reporting incidents—particularly sex-related offenses—through academic rather than law enforcement channels. The Family Educational Rights and Privacy Act (FERPA) generally prohibits the disclosure of education records or information from education records, which originally included personally identifiable details on crime incidents. As a result of a 1992 amendment to FERPA, however, reports of incidents maintained by campus law enforcement officials for law enforcement purposes are not now classified as education information and, therefore, may be disclosed. Even incidents reported to campus authorities other than law enforcement officials may be included in the campus crime statistics as long as information identifying the persons involved is not disclosed. But reporting such incidents in the statistics is not required under a Department interpretation of the Crime Awareness and Campus Security Act. According to that interpretation, colleges may exclude from their statistics those incidents that campus law enforcement officials cannot validate because, for example, the parties’ names were not disclosed. The fact that the incidents need not be reported is reflected by variations in campus security reports, as some reports excluded information from non-law-enforcement sources for which no personally identifiable information was provided. Our review of 25 reports prepared by colleges showed that some of the data may have been incomplete or incompatible because of differences in safety officials’ access to information, insistence on verifiable data, or both. 
Six reports showed direct and varied attempts to address these differences—for example, by supplementing required crime categories with explanatory subcategories, adding a column showing incidents reported to other officials, or adding footnotes. When we asked campus law enforcement officials at the 25 colleges how they treated such cases, we found an even greater variation in their responses than in the reports. For example, nine said their numbers included incidents reported to campus officials who were not law enforcement officials without any notation to that effect, and four said their numbers excluded incidents they could not verify. Some were concerned about reporting incidents for which no details were provided because, without details on specific cases, they were unable to verify that a crime had occurred, had been properly classified, or had been counted only once—if, for example, a crime had been reported to more than one office. At some colleges, security officials do not receive even unverifiable statistics from counselors: Officials at five colleges said counselors are not required to or generally do not report incidents to them, and the general counsel of one state’s higher education organization concurred in that interpretation. Although colleges’ statistical reports included most of the prescribed criminal reporting categories, reporting officials appeared to have difficulty principally with two categories: sex offenses and murder. In 60 percent of the reports we reviewed, colleges had difficulty complying with the reporting requirement for sex-related offenses. Colleges are required to report statistics on sexual offenses in two categories: forcible and nonforcible offenses. Of the reports we reviewed, 15 incorrectly categorized offenses. 
For example, several colleges listed incidents as “rape” or “attempted rape,” both of which are less inclusive than the term “forcible sexual offense.” We also noted a discrepancy in how colleges reported the number of murders. Seven of the reports we reviewed labeled incidents resulting in death as homicides, but the law requires the term “murders.” According to the Uniform Crime Reporting Handbook, homicide can also include killings that result from negligence, whereas murder refers to willful killings. Because homicide is not as specific a term, the use of this broader category could obscure the actual number of murders. The Department’s regulations for the Crime Awareness and Campus Security Act require colleges to report statistics on murders, forcible rapes, and aggravated assaults that manifest evidence of prejudice based on race, religion, sexual orientation, or ethnicity, as defined in the Hate Crimes Statistics Act. However, of the reports we reviewed, only five included this information. Eleven of the 16 officials we asked about the omission told us they were unaware of the requirement, which was not mentioned in the Department’s letters explaining the statistical reporting requirements. Another two said they lacked direction on how to report these crimes. Although the Crime Awareness and Campus Security Act requires that crime statistics include on-campus occurrences reported to local police, our interviews with college officials and review of their statistical reports suggest that colleges vary in their inclusion of incidents reported to local police. Of the 25 reports we reviewed, 1 specifically stated that it did not include incidents reported to local police, and a second stated that it included such incidents when available. In contrast, six reports indicated that incidents reported to local police were included. 
According to a law enforcement official we contacted and our analysis of a Department program review, reporting such incidents can be difficult. For example, record systems of some local police departments do not lend themselves to converting the incidents to the categories required for campus security reports. Moreover, identifying incidents at college-related facilities can be a problem when a campus is dispersed throughout a large urban area. For three crime categories—liquor, drug, and weapons possession violations—the law requires statistics on the number of arrests, rather than on the number of reported crimes. For these categories, uniformity of statistics can be affected to some degree by school policies and type of authority of the campus security department. For example, one campus security report we reviewed contained a footnote to the effect that liquor-law violations were frequently adjudicated through campus judicial procedures and, therefore, would not be included in the arrest statistics. Three law enforcement officials told us that offenses are less likely to result in arrests on campuses that do not have security departments with the power to make arrests. We identified eight states that require public access to campus police or security department records on reported crimes: California, Massachusetts, Minnesota, Oklahoma, Pennsylvania, Tennessee, Virginia, and West Virginia. In all but Minnesota, the laws in general apply to all institutions of higher education, public and private. Minnesota’s law applies only to public colleges. Three of the eight states (Massachusetts, Pennsylvania, and Tennessee) have laws specifically requiring campus safety authorities to maintain daily logs open to public inspection. The remaining five, while not prescribing the log format, require disclosure of information similar to that required to be kept in the logs. Certain provisions are common to a number of these state laws. 
For example, they generally contain a provision exempting disclosure that is otherwise prohibited by law. Many prohibit publication of the names of victims or of victims of sex-related crimes. Many also include some type of provision protecting witnesses, informants, or information that might jeopardize an ongoing investigation. Several law enforcement officials emphasized to us the importance of including such a provision. The laws also differ in a number of other respects, such as the following: California, Pennsylvania, and Oklahoma specifically provide for penalties for noncompliance; the other states do not specify penalties. Only California includes a specific reference to occurrences involving hate; in fact, California’s law requires inclusion of noncriminal hate-related incidents. For more information on the eight laws, see appendix II. We also agreed to determine whether any legal challenges had been raised to state open campus crime log laws and whether the effectiveness of such laws had been studied. We did not find any reported cases challenging these laws or any studies of their positive or negative effects. In addition, according to the Student Press Law Center’s Covering Campus Crime: A Handbook for Journalists, all 50 states have open records or “sunshine” laws, most of which require public institutions’ records to be open to the public unless they are specifically exempted. Generally, public colleges are covered by those laws. For example, Colorado’s open records law declares that it is public policy that all state records be open for inspection, including all writings made, maintained, or kept by the state or any agency or institution—which would include state colleges. These laws generally provide that if records are kept, they must be open; the laws are not intended to impose a new recordkeeping requirement. The consistency and completeness of campus crime reporting envisioned under the act have been difficult to attain for two primary reasons. 
First, the differing characteristics of colleges—such as their location in an urban or other setting or the extent to which complaints may be handled through campus governance rather than through police channels—affect the colleges’ ability to provide a complete and consistent picture of incidents that occur on their campuses. Second, some confusion exists about reporting requirements, particularly about how certain categories of crimes are to be classified. The Department originally relied mostly on its regulations, letters to colleges, and technical assistance to implement the Crime Awareness and Campus Security Act. Its continued efforts in providing technical assistance to school officials, as well as its recent issuance of monitoring guidance to Department officials and its current work to update audit guidelines for independent auditors, may achieve more consistent reporting and compliance with the law by colleges. For example, these efforts may improve consistency in categories used and type of crimes reported. However, inherent differences among colleges will be a long-term obstacle to achieving comparable, comprehensive campus crime statistics. Although a federal open crime log law could offer more timely access to information on campus crime and a means of verifying the accuracy of schools’ statistical reports, such logs would continue to reflect the inherent differences among colleges apparent in the summary statistics currently required by the act. For example, such logs might not include off-campus incidents or, without an amendment to FERPA, incidents that students report through non-law-enforcement channels. On February 12, 1997, the Department of Education provided comments on a draft of this report (see app. III). The Department generally agreed with our basic conclusions and provided us a number of technical comments, which we incorporated as appropriate. 
We are sending copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. Please call me at (202) 512-7014 or Joseph J. Eglin, Jr., Assistant Director, at (202) 512-7009 if you or your staff have any questions about this report. Other staff who contributed to this report are listed in appendix IV. To determine the actions the Department of Education has taken to implement and monitor compliance with the Crime Awareness and Campus Security Act, we interviewed officials at the Department’s headquarters and regional offices and analyzed pertinent regulations, policy guidance, and other documents. To identify difficulties colleges were having in complying with the act, we interviewed officials at 27 colleges selected from a judgmental sample of colleges from the following four groups. Members of the International Association of Campus Law Enforcement Administrators (IACLEA)—Ten Colleges. Our initial college law enforcement contact was the Director of Police at the University of Delaware, also a past president of IACLEA and a recognized authority on the Crime Awareness and Campus Security Act. He provided us with the names of chief law enforcement officials at eight IACLEA member colleges, one in each of the eight states with open campus police log laws. These officials, in turn, referred us to two additional member colleges. Non-IACLEA Members in States With Open Log Laws—Eight Colleges. Using a list of non-IACLEA colleges provided by IACLEA, we selected six 4-year and two 2-year colleges representing all eight states with open log laws and spoke to their heads of campus security. All eight colleges had an enrollment exceeding 1,000 students. Colleges in States Without Open Log Laws—Seven Colleges. 
From a universe of colleges representing all states, we randomly selected colleges, with enrollments exceeding 1,000 students, that participated in title IV programs from the Department’s Integrated Postsecondary Education Data System, stratified by type of college (such as 4-year private or 2-year public) and geographic region. The chiefs of campus security at these seven colleges composed the third group of officials interviewed. Colleges Involved in Complaints About Crime Statistics—Two Colleges. We included two other colleges for information on complaints regarding crime statistics. We included the first of these because a complaint had been lodged against that college. We included the second college because it was subject to the same state crime reporting system as another college—the only one that has undergone an in-depth Department review as a result of a crime statistic complaint. In addition, we asked the campus security officials interviewed to send us a copy of their most recent campus security statistics. We received statistical reports from 25 colleges and evaluated them to determine the extent to which the reports conformed to crime reporting requirements prescribed in the act. We did not trace the numbers to source documents to check their accuracy or completeness. We also searched the literature and reported case law to determine whether any studies had been done on the effects of or legal challenges to state open log laws. We analyzed state statutes and spoke with representatives of campus safety and other interest groups as well as faculty specializing in criminal justice. We performed our work between June 1996 and January 1997 in accordance with generally accepted government auditing standards. 
[Appendix II table excerpt: selected provisions of state open campus crime log laws]
- Information to be disclosed: names and addresses of arrested persons and the charges against them; incidents involving certain types of handicapped persons are exempt from disclosure and are to be maintained separately.
- Coverage: public and private campuses (under the state’s Campus Security Act, private colleges’ police departments are public agencies for the limited purpose of crime enforcement); records must be open if kept, as the intent is not to impose a new recordkeeping requirement.
- Information specifically not required unless otherwise provided by law: names of persons reporting, victims, witnesses, or uncharged suspects, or other information related to an investigation.
- Required identification is not specified; information may be withheld upon certification of need to protect the investigation, but in no event after the arrest.

The following staff made significant contributions to this report: Meeta Sharma, Senior Evaluator; Stanley G. Stenersen, Senior Evaluator; and Roger J. Thomas, Senior Attorney.
Pursuant to a congressional request, GAO reviewed the progress made under the Crime Awareness and Campus Security Act, focusing on: (1) how the Department of Education has implemented and monitored compliance with the act; (2) the kinds of problems, if any, colleges are having in complying with the act; and (3) the requirements of state laws related to public access to police records on reported crimes on campuses. GAO noted that: (1) although colleges are having difficulty complying with the act, the Department only recently began a systematic effort to monitor compliance; (2) starting in 1991, the Department of Education issued policy guidance to colleges for implementing the law's crime reporting requirements; (3) since that time, the Department has also provided technical assistance to individual colleges upon request; (4) although the Department began issuing implementing guidance to colleges less than 1 year after the law was passed, the Department has only recently begun to develop procedures for its program reviewers and auditors that systematically address monitoring compliance with these requirements; (5) moreover, citing resource limitations, the Department delayed preparing a report on campus crime statistics for which the law prescribed a September 1995 issuance date; (6) the Department issued the report in February 1997; (7) at the campus level, colleges are finding it difficult to consistently interpret and apply some of the law's reporting requirements; (8) for example, GAO's analysis showed considerable variation in colleges' practices for deciding which incidents to include in their reports and what categories to use in classifying certain crimes; (9) areas of difficulty included deciding how to include incidents reported to campus officials other than law enforcement officers, interpreting federal requirements for reporting sexual offenses, and reporting data on hate crimes; (10) federal legislation proposed in the 104th Congress would have 
augmented available information on campus crime by requiring that campus police records be open to the campus community; (11) similar laws exist in eight states; (12) three laws contain a specific requirement that colleges maintain daily logs; (13) most laws protect the identity of victims and informants from disclosure and ensure that any information that might jeopardize an ongoing investigation also remains confidential; (14) the state laws vary in many details, such as whether identification of juvenile offenders is required and whether noncompliance by the college can result in penalties; and (15) these laws differ from the 1990 act in requiring year-round access to campus police reports rather than annual summary statistics.
PPACA establishes certain conditions governing participation in the CO-OP program. Specifically, PPACA defines a CO-OP as a health insurance issuer organized under state law as a nonprofit, member corporation whose activities substantially consist of the issuance of qualified health plans in the individual and small group markets in the state where the CO-OP is licensed to issue such plans. PPACA prohibits organizations that were health insurance issuers on July 16, 2009, or were sponsored by a state or local government from participating in the CO-OP program. PPACA also requires that (1) governance of a CO-OP be subject to a majority vote of its members; (2) the governing documents of a CO-OP incorporate ethics and conflict of interest standards protecting against insurance industry involvement and interference; and (3) the operation of a CO-OP have a strong consumer focus, including timeliness, responsiveness, and accountability to its members. In addition, PPACA directs CMS to prioritize the award of CO-OP loans to applicants that plan to offer qualified health plans statewide, plan to utilize integrated models of care, and have significant private support. Consistent with PPACA, CMS established two types of loans: start-up loans and solvency loans. Start-up loans cover approved start-up costs including salaries and wages, fringe benefits, consultant costs, equipment, supplies, staff travel, and approved indirect costs. After the initial disbursement of start-up loan funds, subsequent disbursements are to be made according to a disbursement schedule established in the loan agreement between CMS and each loan recipient. Subsequent disbursements are also contingent upon evidence demonstrating the loan recipient’s successful achievement of milestones established as part of that loan agreement. Milestones could include obtaining health insurance licensure and submitting timely reporting information in the required format. 
Loan recipients can coordinate with CMS to adjust the disbursement schedule and milestones as needed. Each disbursement of a start-up loan must be repaid within 5 years of the disbursement date. Solvency loans assist CO-OPs in meeting state insurance solvency and reserve requirements. CO-OPs may request disbursements of solvency loans “as needed” to meet states’ reserve capital and solvency requirements. Reasons for CO-OPs needing additional solvency disbursements could include enrollment growth and higher than anticipated claims from members. CO-OP requests are subject to CMS review of necessity and sufficiency. Each disbursement of a solvency loan must be repaid within 15 years of the disbursement date. The CO-OP program is a direct loan program. For a direct loan, the estimated long-term cost to the government—known as the credit subsidy cost—is calculated as the net present value of estimated cash flows over the life of each loan. A credit subsidy cost arises when the present value of estimated payments by the government (such as loan disbursements) exceeds the present value of estimated payments to the government (such as principal repayments, fees, interest payments, and recoveries). Credit subsidy costs are required to be covered by an appropriation. For example, if CMS awards a $10 million solvency loan to a CO-OP, the $10 million represents the payments by the government. If CMS calculates the present value of estimated payments to the government to be $6 million, the loan’s subsidy cost would be the net difference of $4 million, which would need to be covered by the CO-OP appropriation. To ensure that applicants met the conditions for participating in the CO-OP program, CMS required applications for CO-OP program loans to include information about organizational structure and governance, as well as bylaws, a business plan, and a feasibility study. CMS and a contractor reviewed applications. 
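The credit subsidy arithmetic described above can be sketched as a net-present-value calculation. This is an illustrative sketch only, not CMS's actual credit model: the repayment schedule and the 5 percent discount rate below are hypothetical assumptions chosen for the example.

```python
# Illustrative sketch of a credit subsidy cost calculation (NOT CMS's model).
# Subsidy cost = PV of payments by the government (disbursements) minus
# PV of estimated payments to the government (repayments, interest, fees).
# All cash flows and the discount rate are hypothetical.

def present_value(cash_flows, rate):
    """Discount a list of (year, amount) cash flows back to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

def credit_subsidy_cost(disbursements, repayments, rate):
    """Net cost to the government; a positive result must be covered
    by an appropriation."""
    return present_value(disbursements, rate) - present_value(repayments, rate)

# Hypothetical solvency loan: $10 million disbursed now, with estimated
# repayments of $800,000 per year over the 15-year repayment window.
disbursements = [(0, 10_000_000)]
repayments = [(year, 800_000) for year in range(1, 16)]

cost = credit_subsidy_cost(disbursements, repayments, rate=0.05)
print(round(cost))  # positive: a subsidy cost the appropriation must cover
```

With these made-up figures, the discounted repayments fall short of the $10 million disbursed, so the difference is the subsidy cost, mirroring the $10 million/$6 million example in the text.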
The contractor was responsible for evaluating applications and recommending loan amounts and a disbursement schedule based on information in the application package and supporting documents. When making the final decisions regarding the loan awards, CMS officials considered the contractor’s recommendations and other factors, including the size of the loan request and the anticipated results of funding the application. By December 2012, CMS had awarded CO-OP program loans to 24 organizations. Several laws had the effect of reducing the $6 billion PPACA originally appropriated for the CO-OP program by about 65 percent and limiting program participation. In 2011, two separate appropriations acts rescinded $2.6 billion of the CO-OP appropriation. Additionally, in January 2013, the American Taxpayer Relief Act of 2012 rescinded another $1.3 billion of the unobligated CO-OP program appropriation, leaving about $1.1 billion of the original $6 billion CO-OP appropriation available for the credit subsidy costs of CO-OP program loan awards and CO-OP program administration. The American Taxpayer Relief Act of 2012 also transferred any remaining appropriation to a contingency fund for providing assistance and oversight to CO-OP loan awardees, which essentially restricted CO-OP program participation to the 24 organizations that received CO-OP loan awards prior to January 2013. One organization in Vermont was unable to obtain a license as a health insurance issuer from its state’s insurance commissioner. As a result, CMS terminated the organization from the CO-OP program. In addition, the CO-OP from Massachusetts expanded to New Hampshire and the CO-OP from Montana expanded to Idaho, and both offered health plans on the exchanges of those states for the first time. (See fig. 1.) PPACA establishes rules governing how issuers can set premium rates. For example, while issuers are no longer able to consider gender or health status in setting premiums, issuers may consider family size, age, and tobacco use. 
Also, issuers may vary premiums based on areas of residence. States have the authority to use counties, Metropolitan Statistical Areas, zip codes, or any combination of the three in establishing the geographic locations by which premiums may vary, known as rating areas. The number of rating areas per state varies, ranging from a low of 1 to a high of 67; most states have 10 or fewer rating areas. PPACA also requires that coverage sold include certain categories of benefits at standardized levels of coverage specified by metal level—bronze, silver, gold, and platinum. Each metal level corresponds to a proportion of allowable charges that a health plan is expected to pay on average, known as the actuarial value. Health plans within a metal level have the same actuarial value, meaning each plan pays approximately the same proportion of allowable charges as the others. Plans from different metal levels have different actuarial values and pay a higher or lower proportion of allowable charges. For example, a gold health plan is more generous overall than a bronze health plan. Actuarial values for health plans under PPACA range from 60 to 90 percent by metal level as follows: bronze (60 percent), silver (70 percent), gold (80 percent), or platinum (90 percent). Issuers may also offer “catastrophic” health plans to individuals under 30 and individuals exempt from the individual mandate. Catastrophic plans have actuarial values that are less than what is required to meet any of the other metal levels. While these plans are required to cover three primary care visits and preventive services at no cost prior to an enrollee reaching the plan’s deductible, they generally do not cover costs for other health care services until a high deductible is met. As of early January 2015, CMS has disbursed about $1.6 billion (64 percent) of the $2.4 billion in loans awarded to the 23 CO-OPs. Specifically, CMS has disbursed $351 million in start-up loans and $1.2 billion in solvency loans. 
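The metal-level actuarial values described above lend themselves to a small worked example. The tier percentages come from the text; the dollar amounts and the `expected_plan_payment` helper are hypothetical, and actual cost sharing for any individual enrollee will differ from these plan-wide averages.

```python
# Minimal sketch of the metal-tier idea: each tier's actuarial value is the
# share of allowable charges a plan is expected to pay on average.
# Tier values are from PPACA as described in the text; dollar figures are
# hypothetical.

ACTUARIAL_VALUE = {
    "bronze": 0.60,
    "silver": 0.70,
    "gold": 0.80,
    "platinum": 0.90,
}

def expected_plan_payment(tier, allowable_charges):
    """Average amount of allowable charges a plan at this tier pays."""
    return ACTUARIAL_VALUE[tier] * allowable_charges

# For $5,000 in allowable charges, a gold plan pays about 80 percent on
# average, leaving roughly 20 percent in expected enrollee cost sharing.
charges = 5_000
plan_share = expected_plan_payment("gold", charges)
print(plan_share)            # the plan's average share of charges
print(charges - plan_share)  # the enrollee's average share
```

Note that this is an average over a standard population: a given gold enrollee with $5,000 in charges may pay more or less than 20 percent out of pocket depending on the plan's deductible and copayment design.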
The $351 million in start-up loan disbursements represents about 98 percent of the total amount of start-up loans awarded, whereas the $1.2 billion in solvency loan disbursements represents about 59 percent of the total amount of solvency loans awarded. (See fig. 2.) CMS has disbursed nearly all of the start-up loans awarded to the 23 CO-OPs. As figure 3 shows, disbursements to 11 CO-OPs equaled 100 percent of their start-up loan awards. Disbursements to the remaining 12 CO-OPs were more than 85 percent of awards. (See fig. 3.) In contrast to start-up loans, the percentage of awarded solvency loans disbursed to the 23 CO-OPs has varied depending on their needs. Disbursements to 3 of the 23 CO-OPs—CoOportunity Health, Kentucky Health Cooperative, Inc., and HealthyCT—equal 100 percent of their solvency loan awards. However, disbursements to 9 CO-OPs are less than 50 percent with disbursements to 2 CO-OPs—Land of Lincoln Health and Maine Community Health Options—being less than 30 percent of awards. The percentage of solvency loan funding disbursed to the remaining 11 CO-OPs ranged from 53 percent to 92 percent. (See fig. 4.) The percentages of solvency loan awards disbursed to CO-OPs reflect each CO-OP’s need for additional resources to meet state solvency and reserve requirements, which may be the result of enrollment growth or higher than anticipated claims from members. As of early January 2015, about $22 million of the CO-OP program appropriation was still available for obligation. According to CMS officials, the agency intends to use the remaining funds for administering the CO-OP program over the next few years. In the 22 states where CO-OPs offered health plans on the states’ health insurance exchanges in 2014, the average premiums for CO-OP health plans in all tiers were lower than the average premiums for other health plans in more than half of the rating areas. 
CO-OPs offered bronze, silver, and gold tier health plans in 91 percent of the rating areas, but offered catastrophic and platinum tier health plans in fewer rating areas. For 4 of 5 tiers, the average premiums for CO-OP health plans were lower than the average premiums for other health plans in 54 to 63 percent of rating areas where both a CO-OP and at least one other issuer offered health plans. For platinum, the average premiums for CO-OP health plans were lower than the average premium for other health plans in 89 percent of rating areas where CO-OPs and other issuers offered health plans. (See table 1.) The percentage of rating areas where the average premium for CO-OP health plans was lower than the average premium for other issuers varied significantly by each state and tier. (See fig. 5 for variation in silver plans and appendixes II through XXIII for more details on how CO-OPs in each state were priced in relation to other health plans.) For example, in six states the average premiums for CO-OP silver plans were higher than the average premiums for other silver plans in all the states’ rating areas where a CO-OP offered a plan. In five other states the average premiums for CO-OP silver plans were lower than the average premiums for other silver plans in all of the states’ rating areas where a CO-OP offered a plan. There was also variation across rating areas in the difference between the average premiums for CO-OP health plans and for other health plans. For example, for all rating areas in which CO-OPs offered silver tier health plans, average CO-OP premiums were priced between 10 and 30 percent lower in 31 percent of rating areas and between 10 and 30 percent higher in 21 percent of rating areas. CO-OP premiums were more than 30 percent lower in 4 percent of rating areas and more than 30 percent higher in 12 percent of rating areas. (See fig. 6.)
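The figure 6 comparison groups each rating area by how far the average CO-OP premium sits above or below the other issuers' average. A hedged sketch of that grouping follows; the report does not specify how exact boundary values (e.g., precisely 10 or 30 percent) are assigned, so the cutoff handling here is an assumption.

```python
# Illustrative sketch of the premium-difference grouping described above.
# Boundary handling at exactly 10 or 30 percent is an assumption; the report
# does not state how ties are assigned.
def premium_difference_bucket(coop_avg: float, other_avg: float) -> str:
    """Classify a rating area by the CO-OP's percent premium difference."""
    diff = 100.0 * (coop_avg - other_avg) / other_avg
    if diff < -30:
        return "more than 30 percent lower"
    if diff < -10:
        return "10 to 30 percent lower"
    if diff < 0:
        return "0 to 10 percent lower"
    if diff <= 10:
        return "0 to 10 percent higher"
    if diff <= 30:
        return "10 to 30 percent higher"
    return "more than 30 percent higher"

# A hypothetical average CO-OP silver premium of $240 against an
# other-issuer average of $300 is 20 percent lower, so the rating area
# falls in the "10 to 30 percent lower" group.
```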
The 22 CO-OPs that participated in the first open enrollment period enrolled over 469,000 (19 percent) of the nearly 2.5 million people who selected individual market health plans in states with CO-OPs. However, total enrollment was short of the overall projections of about 559,000 that CO-OPs included in their original loan applications, and 8 CO-OPs accounted for about 385,000 (85 percent) of the total number of CO-OP enrollees. These 8 CO-OPs, in particular, exceeded their enrollment projections for the first open enrollment period, with 5 more than doubling their projected enrollment. The remaining 14 CO-OPs did not meet their enrollment projections for the first enrollment period, and 10 of those CO-OPs enrolled less than half of their projected enrollment numbers. (See fig. 7.) We provided a draft of this report to HHS for comment. In its written comments, which appear in appendix XXIV, HHS stated its commitment to beneficiaries of the CO-OP program and taxpayers, and described various activities used to monitor the CO-OP program. The department provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dsouzav@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XXIV. Table 2 provides the total loan amounts awarded to each of the 23 consumer operated and oriented plans (CO-OPs) as of January 2014.
The consumer operated and oriented plan (CO-OP) in Arizona offered catastrophic, bronze, silver, and gold health plans in each of the state’s seven rating areas, but did not offer a platinum health plan. Figure 8 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Arizona tended to be among the most expensive premiums. (See fig. 8.) The consumer operated and oriented plan (CO-OP) in Colorado offered catastrophic, bronze, silver, and gold health plans in each of the state’s 11 rating areas, but did not offer a platinum health plan. Figure 9 compares CO-OP premiums to those of all other plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Colorado tended to be among the least expensive premiums. (See fig. 9.) The consumer operated and oriented plan (CO-OP) in Connecticut offered catastrophic, bronze, silver, and gold health plans in each of the state’s eight rating areas, but did not offer a platinum health plan. Figure 10 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Connecticut were among the most expensive premiums for catastrophic and silver health plans. For bronze health plans, the CO-OP’s premiums tended to vary widely within each rating area, generally ranging from the most expensive premium to among the least expensive. The CO-OP’s premiums for gold health plans varied in relation to other issuers’ premiums across the rating areas. (See fig. 10.)
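The appendix figures place each CO-OP premium within the rank-ordered distribution of all premiums offered in a rating area and tier. One plausible way to compute such a percentile position is sketched below; GAO's exact ranking convention is not stated in the report, so the "fraction of plans priced strictly below" rule used here is an assumption.

```python
# Hypothetical sketch of the rank-ordering behind the appendix figures:
# the percentile position of one premium among all premiums offered in a
# rating area and tier. The "strictly below" convention is an assumption;
# the report does not specify the exact method.
def premium_percentile(premium: float, all_premiums: list[float]) -> float:
    """Percent of plans in the rating area/tier priced below this premium."""
    if not all_premiums:
        raise ValueError("all_premiums must be non-empty")
    below = sum(1 for p in all_premiums if p < premium)
    return 100.0 * below / len(all_premiums)

# A hypothetical CO-OP premium of $300 among plans priced at $200, $250,
# $300, and $350 has two cheaper competitors out of four plans, placing it
# at the 50th percentile.
```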
The consumer operated and oriented plan (CO-OP) in Illinois offered bronze, silver, and gold health plans in each of the state’s 13 rating areas, but did not offer catastrophic or platinum health plans. Figure 11 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Illinois tended to be among the most expensive plans offered. However, in many rating areas, there was also a CO-OP offering that had a premium close to or below the middle. (See fig. 11.) The consumer operated and oriented plan (CO-OP) in Iowa offered plans in all tiers in each of the state’s seven rating areas. Figure 12 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Iowa varied widely, ranging from the least to most expensive. (See fig. 12.) The consumer operated and oriented plan (CO-OP) in Kentucky offered plans in all tiers in each of the state’s eight rating areas. Figure 13 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Kentucky tended to be among the least expensive. However, in several rating areas, the CO-OP premiums were also among the most expensive premiums. (See fig. 13.) The consumer operated and oriented plan (CO-OP) in Louisiana offered plans in all tiers in each of the state’s eight rating areas. Figure 14 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans.
The premiums for health plans offered by the CO-OP in Louisiana varied depending on the rating area and tier, ranging from among the least to the most expensive. (See fig. 14.) The consumer operated and oriented plan (CO-OP) in Maine offered catastrophic, bronze, silver, and gold health plans in each of the state’s four rating areas, but did not offer a platinum health plan. Figure 15 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Maine were generally among the least expensive premiums. In rating area 1, however, CO-OP premiums ranged from the least to most expensive for silver health plans and were among the most expensive for catastrophic and bronze health plans. (See fig. 15.) The consumer operated and oriented plan (CO-OP) in Maryland offered bronze, silver, and gold health plans in each of the state’s four rating areas, but did not offer catastrophic or platinum health plans. Figure 16 compares CO-OP premiums to those of all plans. Specifically, it shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Maryland were among the most expensive premiums for bronze plans. However, the CO-OP premiums for silver and gold plans varied widely within each rating area, ranging from among the least to the most expensive premiums. (See fig. 16.) Appendix X: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Maryland Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Anne Arundel, Baltimore, Baltimore City, Harford, and Howard counties.
Rating area 2 includes Calvert, Caroline, Cecil, Charles, Dorchester, Kent, Queen Anne’s, Somerset, St. Mary’s, Talbot, Wicomico, and Worcester counties. Rating area 3 includes Montgomery and Prince George’s counties. Rating area 4 includes Allegany, Carroll, Frederick, Garrett, and Washington counties. The consumer operated and oriented plan (CO-OP) in Massachusetts offered plans in all tiers in 5 of the state’s 7 rating areas. Figure 17 compares CO-OP premiums to those of all plans. Specifically, it shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Massachusetts tended to be among the least expensive across all tiers and rating areas. (See fig. 17.) Appendix XI: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Massachusetts Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes 3-digit zip codes 010, 011, 012, and 013. Rating area 2 includes 3-digit zip codes 014, 015, and 016. Rating area 3 includes 3-digit zip codes 017 and 020. Rating area 4 includes 3-digit zip codes 018 and 019. Rating area 5 includes 3-digit zip codes 021, 022, and 024. Rating area 6 includes 3-digit zip codes 023 and 027. Rating area 7 includes zip codes that begin with 025 and 026. The consumer operated and oriented plan (CO-OP) in Michigan offered catastrophic, bronze, silver, and gold health plans in 13 of the state’s 16 rating areas, but did not offer a platinum health plan. Figure 18 compares CO-OP premiums to those of all plans. Specifically, it shows the percentile range in which CO-OP premiums fell after rank-ordering all plans.
The premiums for health plans offered by the CO-OP in Michigan tended to be the most expensive premiums. (See fig. 18.) Appendix XII: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Michigan Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Monroe and Wayne counties. Rating area 2 includes Macomb and Oakland counties. Rating area 3 includes St. Clair County. Rating area 4 includes Lenawee and Washtenaw counties. Rating area 5 includes Genesee and Shiawassee counties. Rating area 6 includes Tuscola County. Rating area 7 includes Ingham and Jackson counties. Rating area 8 includes Arenac, Bay, Gratiot, and Saginaw counties. Rating area 9 includes Cass and Van Buren counties. Rating area 10 includes Branch, Calhoun, and Kalamazoo counties. Rating area 11 includes Allegan and Barry counties. Rating area 12 includes Ionia, Kent, Lake, Mason, Mecosta, Montcalm, Muskegon, Newaygo, Oceana, Osceola, and Ottawa counties. Rating area 13 includes Clare, Gladwin, Isabella, and Midland counties. Rating area 14 includes Antrim, Benzie, Charlevoix, Emmet, Grand Traverse, Kalkaska, Leelanau, Manistee, Missaukee, and Wexford counties. Rating area 15 includes Alcona, Alpena, Cheboygan, Chippewa, Crawford, Iosco, Mackinac, Montmorency, Ogemaw, Oscoda, Otsego, Presque Isle, and Roscommon counties. Rating area 16 includes Alger, Baraga, Delta, Dickinson, Gogebic, Houghton, Iron, Keweenaw, Luce, Marquette, Menominee, Ontonagon, and Schoolcraft counties. The consumer operated and oriented plan (CO-OP) in Montana offered plans in all tiers in each of the state’s four rating areas.
Figure 19 compares CO-OP premiums to those of all plans. Specifically, it shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Montana were generally among the least expensive premiums for catastrophic and gold plans. CO-OP premiums for bronze and silver plans varied widely, ranging from among the least to most expensive. (See fig. 19.) Appendix XIII: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Montana Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Carbon, Musselshell, Stillwater, Sweet Grass, and Yellowstone counties. Rating area 2 includes Broadwater, Cascade, Chouteau, Deer Lodge, Gallatin, Jefferson, Judith Basin, Lewis and Clark, Silver Bow, and Teton counties. Rating area 3 includes Flathead, Lake, and Missoula counties. Rating area 4 includes Beaverhead, Big Horn, Blaine, Carter, Custer, Daniels, Dawson, Fallon, Fergus, Garfield, Glacier, Golden Valley, Granite, Hill, Liberty, Lincoln, Madison, McCone, Meagher, Mineral, Park, Petroleum, Phillips, Pondera, Powder River, Powell, Prairie, Ravalli, Richland, Roosevelt, Rosebud, Sanders, Sheridan, Toole, Treasure, Valley, Wheatland, and Wibaux counties. The consumer operated and oriented plan (CO-OP) in Nebraska offered plans in all tiers in each of the state’s four rating areas. Figure 20 compares CO-OP premiums to those of all plans. Specifically, it shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Nebraska tended to be among the least expensive premiums, with some exceptions such as catastrophic plans in rating area 1. (See fig. 20.)
Appendix XIV: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Nebraska Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes 3-digit zip codes 680 and 681. Rating area 2 includes 3-digit zip codes 683, 684, and 685. Rating area 3 includes 3-digit zip codes 686, 687, 688, and 689. Rating area 4 includes 3-digit zip codes 690, 691, 692, and 693. The consumer operated and oriented plan (CO-OP) in Nevada offered catastrophic, bronze, silver, and gold health plans in each of the state’s four rating areas. Platinum plans were only offered in rating area 1. Figure 21 compares CO-OP premiums to those of all plans. Specifically, it shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Nevada varied depending on the tier and rating area. In rating areas 2 and 3, CO-OP premiums were among the most expensive. (See fig. 21.) Appendix XV: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Nevada Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Clark and Nye counties. Rating area 2 includes Washoe County. Rating area 3 includes Carson City, Douglas, Lyon, and Storey counties. Rating area 4 includes Churchill, Esmeralda, Eureka, Humboldt, Lander, Lincoln, Elko, Mineral, Pershing, and White Pine counties.
The consumer operated and oriented plan (CO-OP) in New Jersey offered a health plan in all tiers in the state’s single rating area. Figure 22 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for the health plans offered by the CO-OP in New Jersey were among the more expensive premiums for bronze and silver health plans and in the middle for catastrophic and gold plans. CO-OP premiums for platinum health plans were among the least expensive. (See fig. 22.) Appendix XVI: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in New Jersey Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Atlantic, Bergen, Burlington, Camden, Cape May, Cumberland, Essex, Gloucester, Hudson, Hunterdon, Mercer, Middlesex, Monmouth, Morris, Ocean, Passaic, Salem, Somerset, Sussex, Union, and Warren counties. The consumer operated and oriented plan (CO-OP) in New Mexico offered catastrophic, bronze, silver, and gold health plans in each of the state’s five rating areas, but did not offer a platinum health plan. Figure 23 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in New Mexico were often among the less expensive premiums for bronze, silver, and gold health plans and were generally in the middle for catastrophic plans. (See fig. 23.)
Appendix XVII: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in New Mexico Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Bernalillo, Sandoval, Torrance, and Valencia counties. Rating area 2 includes San Juan County. Rating area 3 includes Doña Ana County. Rating area 4 includes Santa Fe County. Rating area 5 includes Catron, Chaves, Cibola, Colfax, Curry, DeBaca, Eddy, Grant, Guadalupe, Harding, Hidalgo, Lea, Lincoln, Los Alamos, Luna, McKinley, Mora, Otero, Quay, Rio Arriba, Roosevelt, San Miguel, Sierra, Socorro, Taos, and Union counties. The consumer operated and oriented plan (CO-OP) in New York offered bronze, silver, gold, and platinum health plans in each of the state’s eight rating areas, but did not offer a catastrophic health plan. Figure 24 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP were consistently among the least expensive premiums across all eight rating areas in New York. (See fig. 24.) Appendix XVIII: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in New York Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Albany, Columbia, Fulton, Greene, Montgomery, Rensselaer, Saratoga, Schenectady, Schoharie, Warren, and Washington counties.
Rating area 2 includes Allegany, Cattaraugus, Chautauqua, Erie, Genesee, Niagara, Orleans, and Wyoming counties. Rating area 3 includes Delaware, Dutchess, Orange, Putnam, Sullivan, and Ulster counties. Rating area 4 includes Bronx, Queens, Kings, New York, Richmond, Rockland, and Westchester counties. Rating area 5 includes Livingston, Monroe, Ontario, Seneca, Wayne, and Yates counties. Rating area 6 includes Broome, Cayuga, Chemung, Cortland, Onondaga, Schuyler, Steuben, Tioga, and Tompkins counties. Rating area 7 includes Chenango, Clinton, Essex, Franklin, Hamilton, Herkimer, Jefferson, Lewis, Madison, Oneida, Oswego, Otsego, and St. Lawrence counties. Rating area 8 includes Nassau and Suffolk counties. The two consumer operated and oriented plans (CO-OPs) in Oregon offered catastrophic, bronze, silver, and gold health plans in each of the state’s seven rating areas, but only platinum health plans in four rating areas. Figure 25 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OPs tended to vary widely, ranging from among the least to the most expensive premiums. (See fig. 25.) Appendix XIX: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Oregon Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Clackamas, Multnomah, Washington, and Yamhill counties. Rating area 2 includes Benton, Lane, and Linn counties. Rating area 3 includes Marion and Polk counties. Rating area 4 includes Deschutes, Klamath, and Lake counties. Rating area 5 includes Columbia, Coos, Curry, Lincoln, and Tillamook counties.
Rating area 6 includes Crook, Gilliam, Grant, Harney, Hood River, Jefferson, Malheur, Morrow, Sherman, Umatilla, Union, Wallowa, Wasco, and Wheeler counties. Rating area 7 includes Douglas, Jackson, and Josephine counties. The consumer operated and oriented plan (CO-OP) in South Carolina offered catastrophic, bronze, silver, and gold health plans in each of the state’s 46 rating areas, but did not offer a platinum health plan. Figure 26 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in South Carolina tended to be among the least expensive premiums. However, CO-OP premiums were among the most expensive premiums in some rating areas. (See fig. 26.) Appendix XX: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in South Carolina Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. The consumer operated and oriented plan (CO-OP) in Tennessee offered catastrophic, bronze, silver, and gold health plans in five of the state’s eight rating areas, but did not offer a platinum health plan.
Figure 27 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Tennessee tended to be among the most expensive premiums. (See fig. 27.) Appendix XXI: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Tennessee Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Carter, Greene, Hancock, Hawkins, Johnson, Sullivan, Unicoi, and Washington counties. Rating area 2 includes Anderson, Blount, Campbell, Claiborne, Cocke, Grainger, Hamblen, Jefferson, Knox, Loudon, Monroe, Morgan, Roane, Scott, Sevier, and Union counties. Rating area 3 includes Bledsoe, Bradley, Franklin, Grundy, Hamilton, Marion, McMinn, Meigs, Polk, Rhea, and Sequatchie counties. Rating area 4 includes Davidson, Montgomery, Robertson, Rutherford, Sumner, Trousdale, Williamson, and Wilson counties. Rating area 5 includes Benton, Carroll, Chester, Crockett, Decatur, Dyer, Gibson, Hardeman, Hardin, Henderson, Henry, Lake, Madison, McNairy, Obion, and Weakley counties. Rating area 6 includes Fayette, Haywood, Lauderdale, Shelby, and Tipton counties. Rating area 7 includes Cannon, Clay, Cumberland, DeKalb, Fentress, Jackson, Macon, Overton, Pickett, Putnam, Smith, Van Buren, Warren, and White counties. Rating area 8 includes Coffee, Dickson, Giles, Hickman, Houston, Humphreys, Lawrence, Lewis, Lincoln, Marshall, Maury, Moore, Perry, Stewart, and Wayne counties. The consumer operated and oriented plan (CO-OP) in Utah offered bronze, silver, and gold health plans in each of the state’s six rating areas, but did not offer a catastrophic or platinum health plan.
Figure 28 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Utah were among the least expensive premiums in three of the state’s six rating areas and tended to be in the middle for the other rating areas. (See fig. 28.) Appendix XXII: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Utah Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Cache and Rich counties. Rating area 2 includes Box Elder, Davis, Morgan, Salt Lake, Summit, and Weber counties. Rating area 3 includes Tooele and Wasatch counties. Rating area 4 includes Utah County. Rating area 5 includes Iron and Washington counties. Rating area 6 includes Duchesne, Emery, Garfield, Grand, Juab, Kane, Millard, Piute, San Juan, Sanpete, Sevier, Uintah, and Wayne counties. The consumer operated and oriented plan (CO-OP) in Wisconsin offered catastrophic, bronze, silver, and gold health plans in six of the state’s 16 rating areas, but did not offer a platinum health plan. Figure 29 compares CO-OP premiums to those of all plans. Specifically, the figure shows the percentile range in which CO-OP premiums fell after rank-ordering all plans. The premiums for health plans offered by the CO-OP in Wisconsin tended to be among the less expensive premiums. However, CO-OP premiums varied widely for silver health plans in rating areas 1, 9, and 12, ranging from among the least to the most expensive. (See fig. 29.)
Appendix XXIII: Consumer Operated and Oriented Plan Premiums Relative to Other Issuers’ Premiums in Wisconsin Plans in the same metal level have the same actuarial value. Catastrophic plans are not required to meet actuarial value targets, but must have actuarial values less than 60 percent. Rating area 1 includes Milwaukee County. Rating area 2 includes Dane County. Rating area 3 includes Polk, Pierce, and St. Croix counties. Rating area 4 includes Chippewa, Dunn, Eau Claire, and Pepin counties. Rating area 5 includes Ashland, Bayfield, Burnett, Douglas, Sawyer, and Washburn counties. Rating area 6 includes Buffalo, Jackson, La Crosse, Monroe, and Trempealeau counties. Rating area 7 includes Crawford, Grant, Iowa, Lafayette, and Vernon counties. Rating area 8 includes Clark, Price, Rusk, and Taylor counties. Rating area 9 includes Racine and Kenosha counties. Rating area 10 includes Lincoln, Marathon, Portage, and Rusk counties. Rating area 11 includes Calumet, Dodge, Fond du Lac, Sheboygan, and Winnebago counties. Rating area 12 includes Ozaukee, Washington, and Waukesha counties. Rating area 13 includes Florence, Forest, Iron, Langlade, Oneida, and Vilas counties. Rating area 14 includes Columbia, Green, Jefferson, Rock, and Walworth counties. Rating area 15 includes Adams, Green Lake, Juneau, Marquette, Richland, and Sauk counties. Rating area 16 includes Brown, Door, Kewaunee, Manitowoc, Menominee, Oconto, and Shawano counties. Vijay D’Souza, (202) 512-7114 or DsouzaV@gao.gov. In addition to the contact named above, Robert Copeland, Assistant Director; Dee Abasute; Sandra George; Giselle Hicks; Aaron Holling; Drew Long; and Christina Serna made key contributions to this report.
The Patient Protection and Affordable Care Act (PPACA) established the CO-OP loan program, which helped create 23 consumer-governed, nonprofit health insurance issuers known as CO-OPs. To foster the creation of the CO-OPs, PPACA authorized two types of loans: (1) start-up loans, which help cover the costs of establishing a CO-OP; and (2) solvency loans, which help meet states' solvency requirements associated with becoming a licensed health insurance issuer. While the program seeks to increase competition and improve accountability to members, questions have been raised about the effects CO-OPs will have on health insurance markets. GAO was asked to study the CO-OP program during 2014. This report examines (1) the status of the CO-OP program loans, (2) how CO-OP health plan premiums compare to the premiums of other health plans, and (3) enrollment in CO-OP health plans. GAO analyzed data from CMS and states; reviewed applicable statutes, regulations, guidance, and other documentation; and interviewed officials from CMS and seven CO-OPs that were selected based on the total amount of loans awarded, geographic region, and the type of health insurance exchange (i.e., federally facilitated or state-based exchange) operated in the state where the CO-OP offered health plans. In commenting on a draft of this report, the Department of Health and Human Services described activities used to monitor the CO-OP program and provided technical comments, which were incorporated as appropriate. As of January 2015, the Centers for Medicare & Medicaid Services (CMS)—the agency that administers and monitors the consumer operated and oriented plan (CO-OP) program—has disbursed about two thirds of the $2.4 billion in loans awarded to 23 CO-OPs. CMS has disbursed about $351 million in start-up loans and $1.2 billion in solvency loans. The percentage of start-up loan funding disbursed to CO-OPs equaled all, or nearly all, of their awards. 
However, the percentage of solvency loan funding disbursed varied depending on each CO-OP's need to meet state solvency and reserve requirements. Disbursements to three CO-OPs equaled 100 percent of their solvency awards, while disbursements to 20 other CO-OPs ranged from 26 to 92 percent of their awards. The average premiums for CO-OP health plans were lower than those for other issuers in more than half of the rating areas—geographical areas established by states and used, in part, by issuers to set premium rates—for the 22 states where CO-OPs participated in the exchange during 2014. As shown in the table below, for four of the five coverage tiers—standardized levels of coverage based on the portion of health care costs expected to be paid by the health plan—the average premiums for CO-OP health plans were lower than the average premiums for other health plans in 54 to 63 percent of these rating areas. In addition, there was variation across rating areas in the difference between the average premiums for CO-OPs and other plans. For example, average CO-OP premiums for silver health plans were priced between 0 and 10 percent lower in 20 percent of rating areas and between 10 and 30 percent lower in 31 percent. During the first open enrollment period (October 1, 2013, through March 31, 2014), the 22 participating CO-OPs enrolled over 470,000 people. However, the total was short of the overall projections included in the CO-OPs' original loan applications, and 8 of the 22 CO-OPs accounted for more than 85 percent of the total number of CO-OP enrollees. These 8 CO-OPs exceeded their enrollment projections with 5 more than doubling their projected enrollment. The remaining 14 did not meet their enrollment projections. Ten of those CO-OPs enrolled less than half of their projected enrollment numbers. Officials from the CO-OPs GAO interviewed cited relatively high premiums, for example, as a reason for lower than projected enrollment levels.
Beginning in 1993, both Congress and the administration agreed that federal employment levels should be cut as a means of reducing federal costs and controlling deficits. Through a series of executive orders and legislation, goals were established for reducing federal staffing levels. Two driving forces in the reductions were the Federal Workforce Restructuring Act of 1994 and the National Performance Review. The act, passed in March 1994, mandated governmentwide reductions of 272,900 FTE positions through fiscal year 1999. The National Performance Review, the administration’s major management reform initiative, recommended that any reductions be accomplished through agency efforts to streamline operations, reduce management control and headquarters positions, and improve government operations through reinvention and quality management techniques. In addition to reducing their workforces and streamlining their operations, agencies are required to measure their performance. The Government Performance and Results Act of 1993 requires agencies to (1) develop strategic plans covering a period of at least 5 years and submit the first of these plans to Congress and OMB by the end of fiscal year 1997, (2) develop and submit annual performance plans to OMB and Congress beginning for fiscal year 1999 containing the agencies’ annual performance goals and the measures they will use to gauge progress toward achieving the goals, and (3) submit annual reports on program performance for the previous fiscal year to Congress and the President beginning with fiscal year 2000. In addition, the Results Act established requirements for pilot projects so that participating agencies could gain experience in using key provisions of the Results Act and provide lessons for other agencies as well. Over 70 federal organizations, including GSA, HUD, and OPM, participated in the pilot projects for performance planning and reporting. 
Between fiscal years 1993 and 1996, when the federal civilian workforce was cut by about 12 percent, the workforces of certain agencies were reduced by larger percentages. These included cuts of 14 percent at HUD, 13 percent at DOI, 22 percent at GSA, 13 percent at NASA, and 42 percent at OPM. To determine which components within HUD, DOI, GSA, NASA, and OPM were downsized and to what extent, we examined agency FTE data for fiscal years 1993 through 1996. These data were organized by component. To address our remaining objectives, we selected one component from each agency, primarily on the basis of the percentage it downsized; however, we also considered such factors as public interest, the effects of downsizing on safety, and privatization of agency functions as selection criteria. We selected HUD’s Office of Housing and GSA’s PBS because, on a percentage basis, they accounted for the largest portion of their parent agencies’ staffing reductions. We selected DOI’s BOR because it was one of the most heavily downsized of DOI’s components, and within BOR, we focused on the Denver Reclamation Service Center because of its central role in BOR operations. Within NASA, the Human Space Flight Program experienced the greatest percentage of downsizing, and from the program’s centers, we focused on KSC because of its high profile as the space shuttle launch and recovery site and because of public concerns that had been expressed about shuttle safety. We selected OPM’s Investigations Service because many of its functions had been privatized. To determine what actions were taken to maintain performance in the selected components as a result of downsizing, the results of these actions on performance, the effects of downsizing on customer satisfaction, and lessons learned, we interviewed officials from the parent agencies, components, unions, and employee associations. 
We also interviewed a small number of randomly selected BOR and KSC employees who were not represented by unions to obtain their views on agency performance during downsizing. We reviewed streamlining, performance, and customer service plans; where available, we examined performance and customer satisfaction measurement data. We did not evaluate the performance or customer satisfaction measures used by the components or verify their performance measurement or customer satisfaction scores. Because of limited customer satisfaction data at the Office of Housing, BOR, and the Investigations Service, we interviewed a small number of randomly selected customers to determine their satisfaction with performance during downsizing. The lessons learned by components reflect the judgment of component officials. We did not independently assess how well these lessons were followed during components’ actual downsizing experiences. The results of our work are limited to the components reviewed and cannot be projected to the entire agency or governmentwide. Our work was performed at the headquarters of the parent agencies and components in Washington, D.C.; at the KSC in Florida; and at BOR’s Reclamation Service Center in Denver, CO. We also interviewed BOR employees, KSC employees, HUD customers, BOR customers, and OPM customers in various locations throughout the United States. We performed our work between October 1996 and November 1997 in accordance with generally accepted government auditing standards. We asked HUD, DOI, GSA, NASA, and OPM to provide comments on a draft of this report. The comments provided are discussed at the end of this letter. Nearly all organizational components in each agency were affected, some more than others. The downsizing of components at the five parent agencies we reviewed ranged from around 2 percent to 100 percent. In addition, the effect of each component’s downsizing on the parent agency’s total reductions varied. 
For example, HUD’s Office of Housing’s FTE reductions between fiscal years 1993 and 1996 were 52 percent of HUD’s total reductions, while BOR’s FTE reductions were 15 percent of DOI’s total reductions for the period. The extent of agency downsizing by selected organizational component is shown in table 1. Although officials told us it was difficult to isolate actions that agencies and their components took to maintain performance independently of downsizing from those taken because of downsizing, the actions they said were taken to maintain performance amid downsizing fell into three categories: refocusing their missions, reengineering their work processes, and taking steps to build and maintain employee skills. Detailed information on each component is provided in appendixes I through V. The National Performance Review, budget reductions, and workforce reductions generally have led federal agencies to rethink how they operate and work to reinvent themselves to become more efficient organizations. According to component officials, most of the five components, under the guidance of their parent agencies, refocused their missions primarily to increase their efficiency. For example, BOR changed its emphasis from water project construction to water resources management because of the increased demand on limited water resources and cutbacks in federal spending. NASA’s Human Space Flight program shifted its focus and scarce resources from operations—which it believed could be conducted more efficiently by private vendors—to its primary mission, research and development. OPM created US Investigations Services (USIS), Inc., to do the background investigations work OPM’s Investigations Service previously provided to other agencies. In addition to refocusing missions, components reengineered their work processes to improve effectiveness and/or efficiency. 
Changes to work processes included consolidating functions into fewer locations, aligning operations more closely with private sector business practices, modernizing data processing systems, placing increased decisionmaking authority in field offices, and increasing reliance on contractors. HUD’s Office of Housing, for example, consolidated single family housing activities from 17 field offices into 1 homeownership center, which officials said helped reduce processing times. The Office plans further consolidations by the year 2000. KSC changed from its traditional contractor oversight role to one of “insight.” Under oversight, KSC directly oversaw contractors on a continual basis, but under insight, KSC will directly oversee contractor processes on a periodic basis. Another component, OPM’s Investigations Service, privatized its investigations operations through the establishment of a private corporation owned by former Investigations Services employees under an Employee Stock Ownership Plan. An Investigations Service official said that USIS completed about 20 percent more investigations in fiscal year 1997 than the Investigations Service did in fiscal year 1996. Along with reengineering their work processes, components generally took steps to help ensure that they had the skilled workforces needed to maintain their performance in a downsized environment. These steps included retraining employees for additional responsibilities and consolidating expertise in fewer locations. For example, according to a PBS official, PBS lacked a workforce suited to its mission; however, it was training its staff to develop a workforce with the necessary skills. BOR officials said that although their workforce had retained the appropriate skills and experience, employees were being retrained and rotated among functions to develop future supervisors and managers. Nevertheless, some officials were concerned about the sufficiency of current or future workforces for some components. 
In March 1997, the HUD Inspector General (IG) reported that the Office of Housing did not have the staffing levels and skill mixes it needed. The IG also reported staffing shortages in some areas, barriers to effective staff redeployment, and mismatches between skills and needs. The report stated that staff reductions would be compounded as anticipated budget restrictions led to further reductions by the end of fiscal year 2000. The report also said staffing needs continued to be most critical in the multifamily insured portfolio monitoring area and, to a lesser degree, in the multifamily note servicing area. The IG said this prevented the component from placing adequate resources on multifamily loss mitigation functions and properly managing troubled multifamily assets. In October 1997, HUD began implementing its 2020 Management Reform Plan, which included a specific initiative to refocus HUD’s mission and retrain its workforce to perform a wider variety of interdisciplinary tasks. Office of Housing officials reported to us that one expected effect of the HUD 2020 Management Reforms will be that Housing will be able to focus a highly trained staff with adequate automated systems on the multifamily portfolio. Also, because of concern about the safety of the space shuttle as KSC downsized and a new contract for shuttle operations was implemented, NASA’s Aerospace Safety Advisory Panel reviewed issues associated with program safety and management. It found that, overall, efforts to streamline the space shuttle program had not created unacceptable risks, but it was concerned with the long-term loss of critical skills and experience. The panel said these personnel issues were challenging and had the potential to adversely affect risk in the future. Component officials, employee representatives, and employees we spoke with believed that efforts to maintain performance had generally been successful. 
However, some expressed concern about whether performance could be maintained with additional downsizing. They also largely believed that their customers remained satisfied, a view generally supported by the limited customer survey data available; however, customers we spoke with did not always agree with that assessment. Officials, employee representatives, and employees we spoke with at all five components said that they generally believed performance had been maintained; however, some officials expressed concern about whether performance could be maintained with additional downsizing. Office of Housing officials, for example, believed downsizing had not greatly affected the Office’s performance. Further, they reported to us that they anticipate the component’s performance would not only be maintained but would improve after the additional downsizing called for by HUD’s management reform plan is completed in the year 2000. Office of Housing union representatives had mixed opinions, with one agreeing with the Housing officials that there were few performance problems to date and another believing that performance had been negatively affected. KSC officials said they believed KSC was still able to perform its mission. However, they also said they were concerned about retaining the human resources needed to react to problems, meet unplanned requirements, and sustain work as the workforce continued to decline. Most KSC employees we spoke with supported their management’s view. They said mission performance had been unaffected by downsizing but that it could be affected by future downsizing. We found limited performance measurement baseline or trend data to validate the belief of component officials and employees that performance had been maintained. However, the data that were available showed that the components generally met performance goals they set for themselves. 
For example, at the time of our review, PBS had been developing performance measures under a Government Performance and Results Act pilot project, and while there were few trend data, the data that existed showed that PBS met or exceeded more than half of the goals it set for itself during downsizing. At KSC, available performance measurement data indicated that KSC had maintained performance during downsizing. The data showed that KSC had maintained its shuttle launch schedule at lower cost and that the number of in-flight problems caused by ground processing had declined. BOR officials were unable to provide any BOR-wide performance measurement data to use in corroborating officials’ views that performance had been maintained. The Results Act requires that agencies collect performance measurement data for managing their programs, and component officials told us they are currently developing these data. Officials, employee representatives, and employees we spoke with at all five components largely believed that their customers remained satisfied even as the organizations took action to maintain their performance during downsizing. This view was generally supported by the limited customer survey data available. However, customers of the Office of Housing and BOR that we spoke with did not always agree. PBS and KSC officials cited customer survey results that supported their positive views of customer satisfaction during downsizing. PBS reported surveys of its buildings’ tenants showed satisfaction increasing from 74 percent to 77 percent between fiscal years 1993 and 1996. KSC reported that its payload customers’ satisfaction remained at about 4.2 on a scale of 1 to 5, with 5 being excellent service, during fiscal years 1993 through 1996, despite downsizing. Because BOR, the Office of Housing, and the Investigations Service had little customer satisfaction data to support their opinions, we interviewed a small number of their customers.
We interviewed seven randomly selected BOR customers consisting of six water districts located in rural areas in the western United States and one state agency. One customer was satisfied with BOR’s performance, five were not satisfied, and one reported declining satisfaction. The most common reason cited for dissatisfaction was the view that BOR more often favored the water demands of politically powerful groups at the expense of rural farmers. However, none of the customers we talked to blamed their dissatisfaction on downsizing. Of the five Office of Housing customers we interviewed, three either were not satisfied with Housing’s performance or had mixed feelings about it. All three said downsizing caused major losses of staff with adequate technical expertise. The two Investigations Service customers we spoke with said there had been no change in their satisfaction level since the Service had been privatized. Although officials from the components identified a number of lessons that they said helped them maintain performance during downsizing, most cited two overarching lessons. They believed that open lines of communication between management and employees were a must and that management must solicit employee input into the planning process. NASA officials told us that unions, employee associations, and employees should be involved with developing the agency downsizing implementation strategy. In addition, officials said that (1) people must be treated with compassion and must know they are valued by the agency; (2) there must be no favoritism even though management may be reluctant to let some people leave; (3) buyouts need to be planned to prevent a sudden loss of expertise; and (4) critical skills should be backed up by more than one person so that, if people leave, the agency still has employees with the required skills. We requested comments on a draft of this report from the heads, or their designees, of each of the five agencies from which we had obtained information.
We received written comments from NASA in a letter dated January 22, 1998, from the Acting Deputy Administrator. The Acting Deputy Administrator had no comments on any of the substantive content of the draft report. However, he did suggest one technical change, which we have made in the report. See appendix VI for a reprint of NASA’s letter. We requested comments from the Administrator, GSA, but despite several follow-up inquiries, no comments were received. On January 28 and 29, 1998, we spoke with the GAO Liaisons at OPM, DOI, and HUD. The OPM GAO Liaison said that OPM had no substantive comments on the draft report. He offered several technical comments to improve the accuracy or context of the draft report; we made these changes in this report where appropriate. The DOI GAO Liaison had no comments on the draft report. The HUD GAO Liaison told us that except for one statement attributed to Office of Housing officials that the Department cannot support, the agency had no comments on the draft report. Consequently, we deleted the sentence from this report. As arranged with your office, unless you announce the contents of this report earlier, we plan no further distribution until 30 days after its issue date. At that time, we will send copies to the Ranking Minority Member of the Subcommittee on Civil Service, House Committee on Government Reform and Oversight, and to the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs. We will also send copies to the Secretaries of HUD and DOI, the Administrators of GSA and NASA, and the Director of OPM. We will make copies available to others on request. The major contributors to this report are listed in appendix VII. If you have any questions about the report, please call me on (202) 512-8676. The Department of Housing and Urban Development (HUD) reduced its workforce by 1,894 FTEs between fiscal years 1993 and 1996.
As shown in Table I.1, the Office of Housing accounted for the largest percentage of HUD’s downsizing. Actions taken that helped HUD’s Office of Housing maintain performance during downsizing can be categorized into three general areas: (1) HUD refocused its mission, (2) the Office of Housing reengineered its work processes, and (3) the Office of Housing took steps to build and maintain employee skills. HUD, according to its streamlining plan, had operated for years without a clear mission, resulting in an inability to mobilize its resources to meet the needs of America’s communities. In a 1996 statement highlighting the agency’s reinvention efforts, the Secretary of HUD stated that HUD’s mission is to help people create communities of opportunity and that the programs and resources of HUD help Americans create cohesive, economically healthy communities. HUD’s Office of Housing has responsibility for (1) underwriting single family, multifamily, property improvement, and manufactured home loans and (2) administering special purpose programs designed specifically for the elderly, the handicapped, and the chronically mentally ill. In addition, the Office of Housing administers assisted-housing programs for low-income families, administers grants to fund the development of resident ownership of multifamily housing properties, and protects consumers against fraudulent practices of land developers and promoters. In support of its mission, HUD officials said that the Office of Housing took or planned a number of actions to help maintain performance during this period of downsizing. Routine, location-neutral activities were consolidated into fewer offices. In August 1994, the Office of Housing consolidated single family housing activities from 17 field offices into the Denver Homeownership Center and reported reduced processing times as a result.
By 2000, Housing plans to consolidate remaining single family loan processing, quality assurance, marketing and outreach, and asset management activities from its 81 field offices into 4 homeownership centers (including Denver), which officials said should enable them to reduce single family personnel by 50 percent. The multifamily housing program consolidated voucher processing in Kansas City in August 1995, property disposition in Atlanta and Fort Worth in October 1996, and risk-sharing lender activities in Greensboro, NC, in January 1997 to reduce processing time, improve customer service, and use staff resources more efficiently. Multifamily housing officials planned to continue consolidating the program’s 51 hub locations until activities are located in 18 hub locations and 33 additional smaller sites by fiscal year 1998. According to these officials, this will create economies of scale and maximize use of limited resources while still maintaining a local presence. They explained that these consolidations were not done specifically because of downsizing but were part of an ongoing HUD reinvention effort, which permitted HUD to adjust to a fluctuating workload and maintain performance during downsizing. In addition to the consolidations, the Office of Housing began implementing paperless processing of mortgage record changes, default reporting, and other record changes. Office of Housing officials told us their employees generally had the appropriate skills and experience to maintain performance during this downsizing period. However, in certain instances, the Office of Housing used contractors to supplement shrinking staff and provide technical expertise for tasks such as physical inspections and property disposition rehabilitation reviews. The various consolidations reduced the need to have expertise in all functions in all offices.
To augment employee skills, the multifamily housing program implemented work sharing using a “matrix” scheme of 5 teams consisting of 18 to 20 offices each. Under this scheme, offices within a matrix shared work so that, if an office needed help with a function, it could get it from another office in the team. Union officials we spoke with also believed that Office of Housing employees had the skills and experience needed to maintain performance in most but not all locations; however, they warned that the skills would not be available as downsizing continued. Although Office of Housing and union officials believed Housing had the skills and experience necessary to maintain performance, a March 1997 IG audit was less optimistic. It found staffing shortages in some areas, barriers to effective staff redeployment, and mismatches between skills and needs. The IG report stated that staff reductions would be compounded as anticipated budget restrictions led to further reductions by the end of fiscal year 2000. The report said staffing needs continued to be most critical in the multifamily insured portfolio monitoring area and, to a lesser degree, in the multifamily note servicing area. Office of Housing officials reported to us that the realignment of functions and responsibilities as outlined in HUD’s 2020 Management Reform Plan, initiated in October 1997, will enable Housing to focus a highly trained staff with adequate automated systems on the multifamily portfolio. The Office of Housing reported that the creation of new methods to deal with its workload, such as the single family homeownership centers and work sharing, had allowed it to maintain, and in some cases improve, performance. Union officials differed on whether downsizing had affected performance.
Two union officials thought performance had been negatively affected, while one union official said it had not been because employees took pride in their work and were willing to do what was necessary to get it done. Office of Housing performance measurement goals changed from year to year, so there were few trend data, but the available data showed that Housing generally met or exceeded the goals it set for itself during this downsizing period. Trend data for one goal, to close sales on 95 percent of each year’s single family inventory, were available. They showed closed sales were 95 percent of inventory in fiscal year 1994, 109 percent in fiscal year 1995, and 194 percent in fiscal year 1996. Housing officials told us that HUD was developing performance measures in compliance with the Government Performance and Results Act. HUD’s 2020 Management Reform Plan seeks to (1) consolidate most of its recordkeeping and many program activities in selected cities around the country and (2) focus the agency on assessing the quality of the government housing stock and on curtailing waste, fraud, and abuse. As part of this plan, HUD would continue its downsizing efforts. In a November 25, 1997, audit-related memorandum providing for an interim review of HUD’s reform plan, HUD’s IG criticized the plan for setting a downsizing target without first analyzing HUD’s workload and mission. The IG reported that HUD’s staff reductions are resulting in a serious loss of technical expertise, leading to concerns about the relative capacity of HUD’s remaining staff to carry out their mission and responsibilities once reforms are in place. HUD officials had not yet responded to the audit-related memorandum at the time we concluded our work. Office of Housing officials, citing feedback from lending institutions and a decreased number of complaints, believed customers were satisfied with their performance.
A union official believed that customer satisfaction on the part of the private real estate industry had increased because private companies were asked to do more for themselves, which they applaud, but satisfaction on the part of the public had decreased because downsizing had reduced the opportunities for the public to interact with the Office of Housing. Another union official believed that customers had generally remained satisfied in spite of downsizing because the extra time employees were devoting to their jobs enabled the Office of Housing to continue providing levels of service after downsizing that were comparable to those provided before downsizing. The Office of Housing provided results of two customer satisfaction surveys done during downsizing. A Denver Single Family Processing Center customer survey in 1995 with an 8 percent response rate indicated that the respondents were satisfied. A 1996 survey found moderate satisfaction among lenders and low satisfaction among realtors for the Section 203(k) Rehabilitation Mortgage Insurance Program, and it found high satisfaction among lenders and moderate satisfaction among realtors for the Section 203(b) Mortgage Insurance Program. However, in the absence of any similar surveys prior to downsizing, we could not tell if satisfaction among these customers had increased or decreased during downsizing. Further, the low response rates for the surveys undermine their value as accurate measures of customer satisfaction. In the absence of agency data measuring changes in customer satisfaction during downsizing, we interviewed a small number of Office of Housing customers. The Office of Housing provided customer lists containing 80 customers composed primarily of nonprofit organizations representing industry groups and homeowners. From the 80 customers, we randomly selected 10. 
We asked them if their satisfaction with the Office of Housing’s performance had changed since 1992 and if their satisfaction had been affected by downsizing. Three of the organizations denied being customers, one could not be contacted, and one did not respond to our questions. Of the remaining five, two were satisfied with the Office of Housing’s performance, but three were either dissatisfied or had mixed feelings. The dissatisfaction all three expressed was due to major losses, at headquarters or field offices, of staff with adequate technical expertise, and all three blamed downsizing. The organizations said these losses made it difficult for the organizations and their constituents to obtain information they needed. One organization described the situation at the Office of Housing as a “brain drain.” Office of Housing officials identified a number of lessons learned they believed helped maintain performance during downsizing. They said agencies should involve employees who will be affected by downsizing in the planning and development of new organizational procedures. They said managers need to “be straight” with employees about what is happening because it makes acceptance easier; tell employees the situation as soon as possible so they can make decisions about their futures; not change direction after the inevitable is accepted because that causes downtime while employees become reoriented; and make every effort to convey to the employees how important they are to the agency’s success and to ensure that the employees feel they are part of a team. Officials also said it is important to develop a cooperative relationship with employee unions. The Department of the Interior (DOI) reduced its workforce by almost 10,200 FTEs between fiscal years 1993 and 1996. Table II.1 shows components with the largest downsizing percentages. 
Actions taken that helped the Bureau of Reclamation (BOR) maintain performance amid downsizing can be categorized into three general areas: BOR (1) refocused its mission, (2) reengineered its work processes, and (3) took steps to build and maintain employee skills. According to the Secretary of the Interior, his agency’s mission is to protect and provide access to the nation’s natural and cultural heritage and to honor its trust responsibilities to tribes. DOI’s internal operating manual states that BOR’s mission is to manage, develop, and protect water and related resources in an environmentally and economically sound manner in the interest of the American public. In fulfilling its mission, BOR designs and constructs water resources projects; develops and enhances recreational uses at BOR projects; conducts research and encourages technology transfer to improve resource management, development, and protection; assists other federal and state agencies in protecting and restoring surface water and ground water resources from hazardous waste contamination; and provides engineering and technical support to federal and state agencies, Native American tribes, and other nations. Over the past decade, BOR shifted its mission emphasis from water project construction to water resources management, including water conservation, environmental restoration, and solutions to the water problems of Native Americans and urban water suppliers. According to BOR officials, this reemphasis occurred at the same time as downsizing, but not because of downsizing. In October 1994, BOR reengineered its Denver facilities into the Reclamation Service Center to provide administrative, research, scientific, and technical services to BOR, other DOI organizational components, water districts, and others. These services are provided through four major units: the Administrative Service Center, the Human Resources Office, the Management Services Office, and the Technical Service Center. 
As part of the restructuring, the Technical Service Center became self-supporting—dependent on client payments for its financing. In addition to establishing the Reclamation Service Center, BOR consolidated its 35 project offices into 26 area offices. BOR officials believed that BOR employees had the appropriate skills and experience to maintain performance amid downsizing, although they also believed additional younger people needed to be hired. To develop a cadre of people to be future supervisors and managers, the officials said BOR was rotating people among functions and retraining them. A union official also believed that BOR had the appropriate skills and experience to maintain acceptable performance with the workforce currently on board. Employees we spoke with generally agreed that BOR had the necessary skills, but they were concerned about the future. One employee said that skills were thinly spread, and although the work would get done, its quality might suffer. Another employee said there were skill gaps, and unless BOR was careful, it would not have the skills needed. Some employees also expressed concern that employees who would have been BOR’s future leaders were leaving and that few young people were being hired. BOR headquarters officials believed no performance problems had emerged because of downsizing; however, Reclamation Service Center officials were less positive. While Service Center officials generally agreed performance had not suffered greatly, they also noted that some problems had emerged, particularly in the Service Center’s ability to provide computer support to other BOR units. Service Center officials believed that people were working harder and were tired because fewer people had to carry the same or even an increased workload, and performance might ultimately suffer because stress leads to mistakes. Service Center officials said there had already been incidents, such as threats of violence and bizarre behavior, brought on by stress. 
A union official concurred that some performance problems had emerged, particularly in the ability to provide all the computer support needed. Employees we spoke with, for the most part, agreed with headquarters officials that downsizing had not yet led to performance problems, although some said downsizing had caused a loss of expertise. We found no BOR-wide performance measurement data to use in corroborating officials’ views that performance had been maintained. The Power Program’s Power Management Laboratory had identified a number of fiscal year 1994 measures, such as FTEs per operating unit and per megawatt, but there were no data for other fiscal years. A Power Program official said data for other fiscal years were being gathered but would not be available for several months, and consequently there were no data showing performance trends during downsizing. The 1994 data showed that BOR was performing within an acceptable range of the power industry’s standards. At the Reclamation Service Center, an official suggested that one performance measure would be whether its Technical Service Center unit broke even each year. The official pointed out that, although the Technical Service Center suffered a deficit of about $180,000 in its first year of operation as a self-supporting activity in fiscal year 1995, it earned a surplus of about $270,000 in its second year even after recovering the previous year’s deficit. BOR officials said the agency was developing performance measures in compliance with the Government Performance and Results Act. BOR officials told us that, based on informal feedback, their customers remained satisfied with their work. One measure of satisfaction cited was that Technical Service Center customers continued to seek and pay for services. Furthermore, an official said, downsizing had benefited customer satisfaction because it forced BOR employees to become more customer-oriented. 
Employees we spoke with were not unanimous, but most felt that customers remained satisfied. One employee echoed management’s statement that downsizing had benefited customer satisfaction because BOR employees had become more customer-oriented and added that having fewer people on projects resulted in more direct communication with customers about routine matters. On the other hand, one employee said BOR had been unable to adequately serve two federal agencies and a water district, and another employee said it was hard to provide staff for all of the unit’s projects. A BOR official said there were no agencywide customer satisfaction data; however, BOR was developing an agencywide customer satisfaction survey that it hoped to administer at 3-year intervals. BOR’s Power Program surveyed 942 customers in 1995 and found that 84 percent of the respondents thought BOR was doing a good to excellent job. There were no predownsizing data for comparison, but the Power Program intended to continue seeking customer feedback in the future. In the absence of BOR-wide customer satisfaction data, we interviewed a small number of customers. BOR provided customer lists containing 627 customers. Customers included other federal agencies, international customers, and state agencies, but most of them were water districts located in rural areas in the western United States. From the 627 customers, we randomly selected 10 to survey, of which 9 were rural water districts and 1 was a state agency. We asked them if their satisfaction with BOR’s performance had changed since 1992 and if their satisfaction had been affected by downsizing. Two organizations denied being BOR customers, and one did not respond. Of the remaining seven, one was satisfied with BOR’s performance, five were not satisfied, and one reported declining satisfaction. 
Reasons cited for dissatisfaction included longer turnaround time for decisions, diminished technical support, increased reporting requirements, and higher water fees. However, the most common reason cited for dissatisfaction was customers’ belief that BOR is prone to favor the water demands of politically powerful groups, such as large population centers and environmental groups, at the expense of rural farmers. Four of the dissatisfied customers did not think that downsizing caused their dissatisfaction, and two were not sure. BOR officials identified a number of lessons learned that they believed helped maintain performance amid downsizing. First, officials said that agencies should include employees in planning and implementing the downsizing. The officials believed it was impossible to communicate with employees too much and said agencies should be open and honest with them. If there must be a reduction-in-force, officials said it should be conducted without favoritism even though there are some employees managers may not want to lose. They said that, because BOR adhered to this principle, only two appeals resulted from its reduction-in-force, both of which were quickly resolved. Officials also stressed the need to plan for buyouts. Although BOR’s first buyout round was open to everyone, officials said that by phasing the times when employees left, BOR prevented a sudden loss of expertise. In addition, the officials cited the need to provide training for employees in coping with downsizing and to give them time to talk out troubling issues with their peers. One official said that, in addition to rewarding employees, agencies should also hold them accountable for their actions. The official said BOR cannot afford to tolerate poor performers since it has downsized and relies on customer reimbursement for funding. The General Services Administration (GSA) reduced its workforce by 4,535 FTEs between fiscal years 1993 and 1996. 
As shown in table III.1, the Public Buildings Service (PBS) accounted for the largest percentage of GSA’s downsizing. Actions taken that helped PBS maintain performance during downsizing can be categorized into two general areas: PBS (1) reengineered its work processes and (2) took steps to build and maintain employee skills. According to its fiscal year 1998 budget overview, GSA’s mission is to improve the effectiveness of the federal government by ensuring quality work environments for its employees. To that end, GSA began moving from being a mandatory source of services to being a provider of choice, which must compete with other providers in terms of cost, quality, and timeliness. GSA reported it is increasingly competing effectively for customer purchases of real property services. In support of GSA’s mission, PBS is responsible for the design, construction, management, operation, alteration, and remodeling of owned and leased space in which accommodations for government activities are provided and, where authorized, for the acquisition, use, custody, and accountability of GSA real property and related personal property. In addition, PBS is responsible for providing leadership in the development and maintenance of needed property management information systems for the government. In January 1995, PBS reengineered its work processes to align itself more closely with private sector business practices, allow regional offices to operate more independently, and fill gaps left by downsizing. PBS decentralized property development operations to field offices to allow for increased contact with customers. In July 1996, GSA implemented the “Can’t Beat GSA Leasing” program to reduce delivery times and enhance cost-effectiveness by cutting procedures and offering greater competition and choices to federal agencies. 
In November 1996, it initiated the “Can’t Beat GSA Space Alterations” program for the procurement of construction services that aim to be better, cheaper, and faster for customers. According to an official, PBS also solicited several national real estate services to identify private sector service providers with which PBS could contract to deliver leasing services to federal agencies. The official said these contracts would allow PBS’ smaller staff to continue to satisfy customers by outsourcing routine transactional details. Further, the official said PBS planned, in fiscal year 1998, to begin transitioning its automated data processing system from multiple applications operating on an antiquated mainframe computer to integrated commercial applications that provide on-line transaction processing, permit data sharing, and support an easy-to-use query facility. A PBS official said PBS lacked the skills mix suited to today’s mission; however, it was developing the necessary mix, for example, by retraining staff in asset management and empty building space disposal. The official further said that PBS was losing experienced employees, forcing those remaining to assume higher-level responsibilities, but this situation also allowed PBS to train people to replace lost managers by providing opportunities for employees to act in management roles. The official added that PBS would have sufficient staff with the appropriate skills and experience to maintain performance only if its improved automated data processing system is successfully implemented. PBS employee representatives differed in their views about whether PBS had the necessary employee skill mix. Officials of one union believed that PBS did not have the appropriate skill mix and experience to maintain performance, while an official of another union believed the skill mix and experience were sufficient to maintain acceptable performance. 
An employee association official also believed that PBS currently had a sufficient skill and experience mix and added that GSA had greatly increased employee training. A PBS official said it was not possible to describe the effects of downsizing alone on PBS performance because downsizing occurred concurrently with changes GSA had already planned to make before downsizing was mandated. However, the official said streamlining its operations enabled PBS to maintain its performance, and implementation of the new data processing system planned for fiscal year 1998 would further enhance its ability to maintain performance. In addition, the official said downsizing forced PBS to implement changes faster, and in that respect, downsizing had been healthy. Employee representatives we spoke with disagreed about the effect of downsizing on PBS performance. Officials of one union believed performance had been affected because constant change did not allow people to settle in and learn their jobs and because, in their opinion, contractor employees cannot perform the work as well as federal employees. An official of another union believed performance had not been greatly affected because of good planning and preparation by the agency. An employee association official said performance was initially affected because employees were placed in jobs for which they were not qualified and experienced employees were replaced by temporary workers. The PBS official said GSA did not have good baseline performance measurement data because it had historically done little performance measurement; however, it was now focusing its attention on developing performance measures to meet Results Act requirements. PBS had developed performance measures under a Results Act pilot project, but they had been evolving from year to year, and there were little data showing trends. 
The data did show, however, that PBS met or exceeded more than half of the pilot project goals it set for itself during fiscal years 1994, 1995, and 1996. PBS surveyed its buildings’ tenants between fiscal years 1993 and 1996, and the results showed an upward trend in satisfaction, rising from 74 percent in fiscal year 1993 to 77 percent in fiscal year 1996. However, because different buildings’ tenants were surveyed in different years, the results did not measure changes in satisfaction of the same tenants. Union officials we spoke with disagreed on the extent of customer satisfaction. Officials of one union believed that the customer survey data misrepresented customer satisfaction because of a low response rate; however, an official of another union believed that customer satisfaction was improving. A PBS official said GSA made a mistake in its first round of buyouts by not targeting them. In some areas and occupations, too many employees left, while in others, too few left, causing a mismatch between buyout results and organization needs. GSA had to use the staff who remained as best it could to repair the damage. The official said it was also a mistake for GSA to offer deferred buyouts over an 18-month period. Although deferred buyouts gave GSA more time to adjust to a downsized workforce, according to the official, the motivation of employees who knew they would be leaving was never the same. NASA reduced its workforce by nearly 4,000 FTEs between fiscal years 1993 and 1996. Table IV.1 shows the components with the largest downsizing percentages. As table IV.1 shows, NASA’s Human Space Flight Program experienced the largest percentage FTE reduction between fiscal years 1993 and 1996. Table IV.2 shows downsizing at the Johnson, Kennedy, Marshall, and Stennis space centers, which are part of the Human Space Flight Program. 
Actions taken that helped the Kennedy Space Center (KSC) maintain performance can be categorized into three general areas: (1) NASA refocused its mission, (2) KSC reengineered its work processes, and (3) KSC took steps to build and maintain employee skills. According to the Administrator of NASA, NASA’s mission encompasses the following: (1) explore, use, and enable the development of space for human enterprise; (2) advance scientific knowledge and understanding of the Earth, the solar system, and the universe; (3) use the environment of space for research; and (4) research, develop, verify, and transfer advanced aeronautics, space, and related technologies. NASA has shifted the focus of its mission from operations to research and development. It has cut back on operations, bought commercial services from the private sector, and focused its efforts on technology development. In carrying out its part of NASA’s refocused mission, KSC designs, constructs, operates, and maintains space vehicle facilities and ground support equipment for launch and recovery operations. It maintains responsibility for prelaunch and launch operations, payload processing for the space shuttle and expendable launch vehicle programs, landing operations for the space shuttle orbiter, and recovery and refurbishment of the reusable solid rocket booster. As NASA refocused on being a high-tech research and development agency, it turned over more of its operations to contractors, and in September 1996, it awarded a space flight operations contract to United Space Alliance. This contract consolidated a number of existing contracts under one prime contractor and gave the prime contractor overall responsibility for space shuttle operations, including orbiter vehicles, solid rocket boosters, the external fuel tank, flight crew equipment, ground support systems, and integration of payloads. 
The space shuttle program remained NASA managed; however, according to KSC officials, KSC changed from its traditional oversight role to “insight.” Under oversight, KSC maintained continual surveillance over the contractor, telling it not only what to do but how to do it. Under insight, KSC will directly oversee contractor processes only periodically. KSC officials said they would maintain technical visibility through audit, surveillance, assessment of trends, software verification, the flight readiness review process, and independent assessment of problems. KSC officials believed KSC had the workforce needed to carry out its shuttle operations; however, they were concerned about the future. Because KSC programs had lost “centuries” of operating and engineering knowledge, the officials worried about having the appropriate skills mix and experience to maintain performance as downsizing continued. Employees we spoke with generally agreed that KSC’s skill mix and experience remained adequate, but some employees believed that if downsizing continued, skills and experience would become inadequate. To help ensure that KSC would continue to have needed skills, its fiscal year 1997 buyout plan was designed to limit skill loss by limiting the number of buyouts in shuttle processing, safety and mission assurance, and payload processing. Further, some senior executive service positions, for example, the shuttle processing and safety and mission assurance program directors, were excluded from buyout eligibility. In addition, KSC officials said they were planning for the succession of managers and other senior people who did leave. KSC instituted individual development plans for future managers and, as part of its senior executive service candidate program, offered programs in management development, project management, and skills training. 
To prepare for work on the international space station, KSC was cross-utilizing people currently working on the space laboratory program, which was winding down. KSC officials said that KSC was still able to perform its mission. However, they were concerned about retaining the human resources needed to react to problems, to meet unplanned and new requirements, and to sustain the work as the workforce continued to decrease. Most of the employees we spoke with believed downsizing had not yet affected KSC’s performance or shuttle safety. One employee, however, believed downsizing had begun affecting performance and said the quality of safety inspections would decline if personnel were not restored and the workload was not reduced. The employee believed safety of the shuttle program had been affected and cited a wrench left inside a solid rocket booster and water spilled on a maneuver pod as causes for concern. Other employees said, although the work gets done, they were concerned about the effect of further downsizing or overload of remaining employees on performance. Performance measurement data showed KSC maintained its shuttle launch schedule at lower cost during downsizing. In addition, as flight costs decreased, quality increased as measured by the decrease in the number of in-flight problems caused by ground processing. According to KSC surveys, customers remained satisfied with KSC’s performance during downsizing. Payload customers rated KSC’s service on a five-point scale ranging from 1 for poor service to 5 for excellent service. Ratings during downsizing were 4.2 in 1993, 4.3 in 1994, 4.2 in 1995, and 4.2 in 1996. 
KSC found the apparent leveling off of satisfaction disturbing but attributed it to several factors: (1) during the survey’s early years, KSC concentrated on improving those issues that drew the most frequent customer comments, but subsequently it concentrated on smaller, but important, improvements; (2) inconsistent methods for counting survey results may have skewed the results; and (3) as KSC’s performance improved, customers came to expect even more from it and became more critical in their survey responses. KSC viewed this critical customer feedback as positive because its customers recognized its commitment to improving customer service and became increasingly forthcoming with suggestions for improvements. Employees we spoke with also believed that customers so far remained satisfied with KSC’s performance. As KSC downsized and transitioned to the space flight operations contract negotiated with United Space Alliance, concern grew about the safety of the space shuttle. This led to a review by NASA’s Aerospace Safety Advisory Panel of issues associated with the safe operation and management of the space shuttle program. The panel concurred with NASA officials’ belief that shuttle safety had not been adversely affected. The panel found that NASA’s efforts to streamline the space shuttle program had not created unacceptable risks. However, the panel also said there was a clear need for NASA to take steps to ensure the availability of a skilled and experienced civil service workforce in sufficient numbers to meet ongoing safety needs. The panel said these personnel issues were challenging and had the potential to adversely affect risk in the future. The panel said the space flight operations contract appeared to be a comprehensive and workable document espousing safety as paramount throughout. 
It also said there were minimal adverse safety implications, especially in the short term, largely because the people currently in place were dedicated to making the new scheme work. However, the panel was concerned with the loss of critical skills and experience among NASA personnel over the long term. It said that NASA should not be misled by the apparent initial success of all the transition efforts and that a major test of the new approach would likely be faced after there was significant turnover among incumbents at all levels. KSC officials identified a number of lessons learned that helped maintain performance during downsizing. The officials said agencies should recognize that they are going to have to downsize, be proactive, and not wait for downsizing to happen before acting. They said unions, employee associations, and employees should be involved in developing the agency’s downsizing implementation strategy. The officials said communication with employees should be open and honest. Communication, the officials said, builds credibility, while silence makes workers think something is going on behind the scenes, and openness helps retain key people by reducing their concerns about their jobs. The officials suggested that agencies do positive things for employees—for example, hold job fairs, which promote the message that the agency is trying to help them, and offer training courses to help people cope with change. The officials said employee anxiety should be recognized and addressed. They believed that employees should be treated with compassion and should know that they are valued by the agency. Employees should be told the agency does not want them to leave, but those who do leave should be respected for taking actions they feel are in their own best interests. The officials said agencies should back up critical skills so that, if people leave, the agency still has employees with those skills. 
The Office of Personnel Management (OPM) reduced its workforce by 2,489 FTEs between fiscal years 1993 and 1996. Components experiencing the largest downsizing percentages are shown in table V.1. Although the Investigations Service downsized by 61 percent based on its full fiscal year 1996 FTE usage, the OPM GAO Liaison noted that, measured against the FTE complement at the close of fiscal year 1996, the reduction from the fiscal year 1993 level was 96 percent. The privatization of Investigations occurred in the last quarter of fiscal year 1996, which dramatically lowered the end-of-year staffing level. Actions taken that helped OPM’s Investigations Service maintain performance during downsizing can be categorized into two general areas: the Investigations Service (1) refocused its mission and (2) reengineered its work processes. Among other things, OPM’s mission includes supporting agencies in merit-based examining and hiring. OPM oversees the merit principles and hiring and retention procedures used by agencies to select applicants for competitive positions in the federal service at general schedule grades and for federal wage system positions. Personnel background investigations are used in support of the selection and appointment process. The Office of Investigations formerly performed these background investigations of federal employees, contractors, and applicants to provide a basis for determining an individual’s suitability for federal employment and whether an individual should be granted clearance for access to national security information. Investigations Service officials said they began downsizing the Investigations Service in 1993 by offering buyouts to employees. In May 1994, the Investigations Service laid off approximately 440 (of about 1,440) employees. 
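Because a workforce privatized late in a fiscal year shows up very differently in full-year FTE usage than in end-of-year headcount, the gap between the two percentages above follows directly from the arithmetic. The sketch below illustrates the calculation; the fiscal year 1993 baseline of about 1,440 staff comes from the discussion above, but the two fiscal year 1996 figures are back-solved approximations for illustration only, not numbers from this report.

```python
def pct_reduction(baseline_ftes, later_ftes):
    """Percent FTE reduction from a baseline staffing level."""
    return 100 * (baseline_ftes - later_ftes) / baseline_ftes

# Hypothetical FY 1996 figures, back-solved to match the reported percentages.
full_year_usage_fy96 = 562  # approximate full-FY 1996 FTE usage
end_of_year_fy96 = 58       # approximate FTE complement at the close of FY 1996

print(round(pct_reduction(1440, full_year_usage_fy96)))  # prints 61
print(round(pct_reduction(1440, end_of_year_fy96)))      # prints 96
```

The same baseline thus yields either a 61 percent or a 96 percent reduction, depending solely on whether the later staffing level is averaged over the fiscal year or measured at its close.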
As a result of continuing downsizing and reinvention of government initiatives, the investigations function was privatized in 1996 through the establishment of a private corporation known as the US Investigations Service, Inc. (USIS). USIS’ workforce, with the exception of people with specialized skills, primarily in marketing, finance, and human resources, was drawn from OPM’s Investigations Service staff. At the time the Investigations Service was privatized, approximately 90 percent of those who worked in the office and received reduction-in-force notices accepted USIS job offers at the same salary and with comparable benefits. The other 10 percent either stayed as part of OPM’s Investigations Service, transferred to another agency, or retired. With a total staff of about 40 individuals, OPM’s Investigations Service currently limits its functions to policy, agency oversight, contract management, processing of Freedom of Information and Privacy Act requests, adjudicating cases, and making suitability determinations. The Investigations Service reengineered its work processes, which enabled it to maintain performance during downsizing. No longer designed to do background investigations, the Investigations Service oversees the government’s contract with USIS. Officials told us that the creation of USIS, an employee-owned firm (owned by former federal civil servants), and the subsequent award of a 3-year contract to USIS to conduct federal background investigations resulted in a seamless transition for OPM’s former federal customers. An Investigations Service official said performance was maintained or improved even as investigations were privatized. The official said that USIS completed about 20 percent more investigations in fiscal year 1997 than the Investigations Service did in fiscal year 1996 and also maintained the Service’s timeliness record. In addition, the official said there had been no decrease in the quality of cases processed by USIS. 
A May 1997 USIS employee survey showed that 85 percent of the respondents recognized the importance of quality, and 80 percent believed that USIS puts the customer’s needs first. Investigations Service officials believed that USIS’ customers were satisfied with its investigations; however, we found no customer satisfaction survey data to support this position. Two customers we spoke with said there had been no change in their respective satisfaction levels since before privatization. An OPM employee association official said there was no indication that customers were dissatisfied. An OPM union representative said that the union wanted to be more proactive and discuss the downsizing and possible impacts with OPM management early on, but that initial communications between management, employees, and union representatives were not very good. An Investigations Service official identified open communication as a lesson learned during the Investigations Service’s downsizing and ensuing privatization. The official said there was no such thing as too much communication, and there should be open lines of communication whereby information can be passed in all directions. The official added that agencies and components should realize they will not be able to do everything and should concentrate on their most critical areas and functions. They should listen to their customers and ensure that their satisfaction is taken into account before making major decisions. Robert P. Pickering, Evaluator-in-Charge Robert W. Stewart, Evaluator
Pursuant to a congressional request, GAO reviewed the Department of Housing and Urban Development (HUD), the Department of the Interior, the General Services Administration (GSA), the National Aeronautics and Space Administration (NASA), and the Office of Personnel Management (OPM) to obtain information on the effects downsizing has had on their performance and what actions were taken to maintain performance, focusing on: (1) which components within the five agencies were downsized and to what extent; (2) what actions were taken to maintain performance for one selected downsized component at each parent agency, the results of those actions on the component's performance, and the effect of the downsizing on customer service; and (3) the lessons that the five components learned about maintaining performance during a period of downsizing. GAO noted that: (1) most components within the five parent agencies were downsized to some extent, although how much varied considerably; (2) the percentage of agency components' full-time equivalent reductions from fiscal year (FY) 1993 through 1996 ranged from 3 percent to 100 percent at HUD, 2 percent to 87.5 percent at Interior, 10 percent to 37 percent at GSA, 3 percent to 42 percent at NASA, and 2 percent to 100 percent at OPM; (3) according to officials of the parent agencies and the five selected components, several actions helped the components maintain performance levels during the period of downsizing; (4) they explained that it was difficult to isolate actions taken independently of downsizing from those taken because of downsizing; (5) however, the actions the officials told GAO about generally fell into three categories: (a) refocusing of missions; (b) reengineering of work processes; and (c) building and maintaining employee skills; (6) the officials stated that the five components were generally able to maintain performance and fulfill the requirements of their missions despite the relatively large downsizing that 
occurred from FY 1993 to FY 1996; (7) although the officials stated that they could not connect specific actions taken with specific outcomes, they stated that without the three actions mentioned, the performance levels of the components would not have been maintained; (8) officials at some components stated that additional downsizing could hamper future performance; (9) it should be noted that GAO's results primarily reflect the viewpoints of officials from the agencies and components and are a snapshot at the time of its review; (10) performance measurement data, particularly baseline data with which current data could be compared, that would support agency officials' views or enable policymakers to track program performance and make informed decisions were limited; (11) the data that were available tended to substantiate the views of component officials that they were meeting goals they had set for themselves; (12) according to component officials and employees or their representatives at the five components, customers remained satisfied with the components' performance during the period of downsizing; and (13) among the lessons learned, officials stated that the most important was the need for early planning and open communication with employees.
The Chesapeake Bay is the largest of the nation’s estuaries, measuring nearly 200 miles long and 35 miles wide at its widest point. Roughly half of the bay’s water comes from the Atlantic Ocean, and the other half is freshwater that drains from the land and enters the bay through the many rivers and streams in its watershed basin. As shown in figure 1, the bay’s watershed covers 64,000 square miles and spans parts of six states—Delaware, Maryland, New York, Pennsylvania, Virginia, and West Virginia—and the District of Columbia. Over time, the bay’s ecosystem has deteriorated. The bay’s “dead zones”—where too little oxygen is available to support fish and shellfish—have increased, and many species of fish and shellfish have experienced major declines in population. The decline in the bay’s living resources has drawn a great deal of public and political attention. Responding to public outcry, on December 9, 1983, representatives of Maryland, Pennsylvania, and Virginia; the District of Columbia; EPA; and the Chesapeake Bay Commission signed the first Chesapeake Bay agreement. Their agreement established the Chesapeake Executive Council and resulted in the Chesapeake Bay Program—a partnership that directs and conducts the restoration of the bay. The signatories to the agreement reaffirmed their commitment to restore the bay in 1987 and again in 1992. The partners signed the most current agreement, Chesapeake 2000, on June 28, 2000. Chesapeake 2000—identified by the Bay Program as its strategic plan—sets out an agenda and goals to guide the restoration efforts through 2010 and beyond. In Chesapeake 2000, the signatories agreed to 102 commitments—including management actions, such as assessing the trends of particular species, as well as actions that directly affect the health of the bay. 
These commitments are organized under the following five broad restoration goals:

Protecting and restoring living resources—14 commitments to restore, enhance, and protect the finfish, shellfish, and other living resources, their habitats, and ecological relationships to sustain all fisheries and provide for a balanced ecosystem.

Protecting and restoring vital habitats—18 commitments to preserve, protect, and restore those habitats and natural areas that are vital to the survival and diversity of the living resources of the bay and its rivers.

Protecting and restoring water quality—19 commitments to achieve and maintain the water quality necessary to support the aquatic living resources of the bay and its tributaries and to protect human health.

Sound land use—28 commitments to develop, promote, and achieve sound land use practices that protect and restore watershed resources and water quality, maintain reduced pollutant inputs to the bay and its tributaries, and restore and preserve aquatic living resources.

Stewardship and community engagement—23 commitments to promote individual stewardship and assist individuals, community-based organizations, businesses, local governments, and schools to undertake initiatives to achieve the goals and commitments of the agreement.

As the only federal signatory to the Chesapeake Bay agreements, EPA is responsible for spearheading the federal effort within the Bay Program through its Chesapeake Bay Program Office. Among other things, the Chesapeake Bay Program Office is to develop and make available information about the environmental quality and living resources of the Chesapeake Bay ecosystem; help the signatories to the Chesapeake Bay agreement develop and implement specific plans to carry out their responsibilities; and coordinate EPA’s actions with those of other appropriate entities to develop strategies to improve the water quality and living resources in the Chesapeake Bay ecosystem. 
In October 2005, we found that the Bay Program had established 101 measures to assess progress toward meeting some restoration commitments and provide information to guide management decisions. For example, the Bay Program had developed measures for determining trends in individual fish and shellfish populations, such as crabs, oysters, and rockfish. The Bay Program also had a measure to estimate vehicle emissions and compare them to vehicle miles traveled to help establish reduction goals for contaminants found in these emissions. While the Bay Program had established these 101 measures, we also found that it had not developed an approach that would allow it to translate these individual measures into an overall assessment of the progress made in achieving the five broad restoration goals. For example, although the Bay Program had developed measures for determining trends in individual fish and shellfish populations, it had not yet devised a way to integrate those measures to assess the overall progress made in achieving its Living Resource Protection and Restoration goal. According to a panel of nationally recognized ecosystem assessment and restoration experts convened by GAO, in a complex ecosystem restoration project like the Chesapeake Bay, overall progress should be assessed by using an integrated approach. This approach should combine measures that provide information on individual species or pollutants into a few broader-scale measures that can be used to assess key ecosystem attributes, such as biological conditions. According to an official from the Chesapeake Bay Program Office, the signatories to the Chesapeake Bay agreement had discussed the need for an integrated approach for several years, but until recently it was generally not believed that, given limited resources, the program could develop an approach that was scientifically defensible. 
The program began an effort in November 2004 to develop, among other things, a framework for organizing the program’s measures and a structure for how the redesign work should be accomplished. In our 2005 report, we recommended that the Chesapeake Bay Program Office complete its efforts to develop and implement such an integrated approach. In response to our recommendation, a Bay Program task force identified 13 key indicators for measuring the health of the bay and categorized these indicators into 3 indices of bay health. With the development of these indices, the Bay Program should be in a better position to assess whether restoration efforts have improved the health of the bay. These indices will also help the Bay Program determine whether changes are needed to its planned restoration activities. The task force also identified 20 key indicators for measuring the progress of restoration efforts and categorized these indicators into 5 indices of restoration efforts. According to the Bay Program, these indices are now being used to assess and report on the overall progress made in restoring the bay’s health and in implementing restoration efforts. The Bay Program has linked these restoration effort indices to the overall restoration goals and this should help the program better evaluate the progress it has made toward meeting the overall goals. In 2005, we determined that the Bay Program’s primary mechanism for reporting on the health status of the bay—the State of the Chesapeake Bay report—did not effectively communicate the current health status of the bay. This was because it mirrored the shortcomings in the program’s measures by focusing on the status of individual species or pollutants instead of providing information on a core set of ecosystem characteristics. 
For example, the 2002 and 2004 State of the Chesapeake Bay reports provided data on oysters, crab, rockfish, and bay grasses, but the reports did not provide an overall assessment of the current status of living resources in the bay or the health of the bay. Instead, data were reported for each species individually. The 2004 State of the Chesapeake Bay report included a graphic that depicted oyster harvest levels at historic lows, with a mostly decreasing trend over time, and a rockfish graphic that showed a generally increasing population trend over time. However, the report did not provide contextual information that explained how these measures were interrelated or what the diverging trends meant about the overall health of the bay. The experts we consulted agreed that the 2004 report was visually pleasing but lacked a clear, overall picture of the bay’s health and told us that the public would probably not be able to easily and accurately assess the current condition of the bay from the information reported. We also found that the credibility of the State of the Chesapeake Bay reports had been undermined by two key factors. First, the Bay Program had commingled data from three sources when reporting on the health of the bay. Specifically, the reports mixed actual monitoring information on the bay’s health status with results from a predictive model and the progress made in implementing specific management actions, such as acres of wetlands restored. The latter two results did little to inform readers about the current health status of the bay and tended to downplay the bay’s actual condition. Second, the Bay Program had not established an independent review process to ensure that its reports were accurate and credible. The officials who managed and were responsible for the restoration effort also analyzed, interpreted, and reported the data to the public. 
We believe this lack of independence in reporting led to the Bay Program’s projecting a rosier view of the health of the bay than may have been warranted. Our expert panelists also told us that an independent review panel—to either review the bay’s health reports before issuance or to analyze and report on the health status independently of the Bay Program—would significantly improve the credibility of the program’s reports. In 2005, we recommended that the Chesapeake Bay Program Office revise its reporting approach to improve the effectiveness and credibility of its reports by (1) including an assessment of the key ecological attributes that reflect the bay’s current health conditions, (2) reporting separately on the health of the bay and on the progress made in implementing management actions, and (3) establishing an independent and objective reporting process. In response to our recommendation that reports should include an ecological assessment of the health of the bay, the Bay Program has developed and used a set of 13 indicators of bay health to report on the key ecological attributes representing the health of the bay. In response to our recommendation that the program should separately report on the health of the bay and management actions, the Bay Program has developed an annual reporting process that distinguishes between ecosystem health and restoration effort indicators in its annual report entitled Chesapeake Bay Health and Restoration Assessment. The most recent report, entitled Chesapeake Bay 2007 Health and Restoration Assessment, is divided into four chapters: chapter one is an assessment of ecosystem health, chapter two describes factors impacting bay and watershed health, chapter three is an assessment of restoration efforts, and chapter four provides a summary of local water quality assessments. 
We believe that the new report format is a more effective communications framework and clearly distinguishes between the health of the bay and management actions being taken. In response to our recommendation to establish an independent and objective reporting process, the Bay Program has charged its Scientific and Technical Advisory Committee with responsibility for assuring the scientific integrity of the data, indicators, and indices used in the Bay Program’s publications. In addition, the Bay Program instituted a separate reporting process on the bay’s health by the University of Maryland Center for Environmental Science. This report, which is released on the same day as the Bay Program’s release of the Chesapeake Bay Health and Restoration Assessment, provides an assessment of the bay’s health in a report card format. While we recognize that the changes are an improvement over the reporting process that was in place in 2005, we remain concerned about the lack of independence in the process. Although members of the Scientific and Technical Advisory Committee are not managing the day-to-day program activities, this committee is a standing committee of the Bay Program and provides input and guidance to the Bay Program on how to develop measures to restore and protect the Chesapeake Bay. In addition, we do not believe that the report card prepared by the University of Maryland Center for Environmental Science is as independent as the Bay Program believes, because several members of the Scientific and Technical Advisory Committee are also employees of the University of Maryland Center for Environmental Science. We therefore continue to believe that establishing a more independent reporting process would enhance the credibility and objectivity of the Bay Program’s reports. 
From fiscal years 1995 through 2004, we reported that 11 key federal agencies; the states of Maryland, Pennsylvania, and Virginia; and the District of Columbia provided almost $3.7 billion in direct funding to restore the bay. Federal agencies provided a total of approximately $972 million in direct funding, while the states and the District of Columbia provided approximately $2.7 billion in direct funding for the restoration effort over the 10-year period. Of the federal agencies, the Department of Defense’s U.S. Army Corps of Engineers provided the greatest amount of direct funding—$293.5 million. Of the states, Maryland provided the greatest amount of direct funding—more than $1.8 billion—which is over $1.1 billion more than any other state. Typically, the states provided about 75 percent of the direct funding for restoration, and the funding has generally increased over the 10-year period. As figure 2 shows, the largest percentage of direct funding—approximately 47 percent, or $1.7 billion—went to water quality protection and restoration, while sound land use received $1.1 billion. We also reported that 10 of the key federal agencies, Pennsylvania, and the District of Columbia provided about $1.9 billion in additional funding from fiscal years 1995 through 2004 for activities that indirectly affect bay restoration. These activities were conducted as part of broader agency efforts and/or would continue without the restoration effort. Federal agencies provided approximately $935 million in indirect funding, while Pennsylvania and the District of Columbia together provided approximately $991 million in indirect funding for the restoration effort over the 10-year period. Of the federal agencies, the U.S. Department of Agriculture provided the greatest amount of indirect funding—$496.5 million—primarily through its Natural Resources Conservation Service. 
Of the states, Pennsylvania provided the greatest amount of indirect funding—$863.8 million. As with direct funding, indirect funding for the restoration effort had also generally increased over fiscal years 1995 through 2004. As figure 3 shows, the largest percentage of indirect funding—approximately 44 percent—went to water quality protection and restoration. Despite the almost $3.7 billion in direct funding and more than $1.9 billion in indirect funding that had been provided to restore the bay, the Chesapeake Bay Commission estimated in a January 2003 report that the restoration effort faced a funding gap of nearly $13 billion to achieve the goals outlined in Chesapeake 2000 by 2010. Subsequently, in an October 2004 report, the Chesapeake Bay Watershed Blue Ribbon Finance Panel estimated that the restoration effort is grossly underfunded and recommended that a regional financing authority be created with an initial capitalization of $15 billion, of which $12 billion would come from the federal government. Although we did not recommend that the Bay Program consider developing a formal process for collecting and aggregating information on the amount of funding provided by the various restoration partners, the program has developed a database to capture this information. Recognizing the need to centrally and consistently account for the activities and funding sources of all Bay Program partners, the program created a Web-based form to collect information on the amount and source of funding being used and planned for restoration activities. Currently, the Bay Program has collected funding data for 2007 through 2009. However, according to the Bay Program, only the 2007 data— totaling $1.1 billion—represents a comprehensive, quality data set, and the program has plans to improve this database by having additional partners provide data and increasing the scope and quality of the information. 
In our 2005 report we found that although Chesapeake 2000 provides the current vision and overall strategic goals for the restoration effort, along with short- and long-term commitments, the Bay Program lacked a comprehensive, coordinated implementation strategy that could provide a road map for accomplishing the goals outlined in the agreement. In 2003, the Bay Program recognized that it could not effectively manage all 102 commitments outlined in Chesapeake 2000 and adopted 10 keystone commitments as a management strategy to focus the partners’ efforts. To achieve these 10 keystone commitments, the Bay Program had developed numerous planning documents. However, we found that these planning documents were not always consistent with each other. For example, the program developed a strategy for restoring 25,000 acres of wetlands by 2010. Subsequently, each state within the bay watershed and the District of Columbia developed tributary strategies that described actions for restoring over 200,000 acres of wetlands—far exceeding the 25,000 acres that the Bay Program had developed strategies for restoring. While we recognize that partners should have the freedom to develop higher targets than those established by the Bay Program, we were concerned that having such varying targets could cause confusion, not only for the partners but also for other stakeholders, about what actions are really needed to restore the bay, and that such varying targets appeared to contradict the effort’s guiding strategy of taking a cooperative approach to achieving the restoration goals. We also found that the Bay Program partners had devoted a significant amount of their limited resources to developing strategies that were either not being used by the Bay Program or were believed to be unachievable within the 2010 time frame. For example, the program invested significant resources to develop a detailed toxics work plan for achieving the toxics commitments in Chesapeake 2000. 
Even though the Bay Program had not been able to implement this work plan because personnel and funding had been unavailable, program officials told us that the plan was being revised. It was therefore unclear to us why the program was investing additional resources to revise a plan for which the necessary implementation resources were not available, and which was also not one of the 10 keystone commitments. According to a Bay Program official, strategies are often developed without knowing what level of resources will be available to implement them. While the program knows how much each partner has agreed to provide for the upcoming year, the amount of funding that partners will provide in the future is not always known. Without knowing what funding will be available, the Bay Program has been limited in its ability to target and direct funding toward those restoration activities that will be the most cost effective and beneficial. As a result of these findings in 2005, we recommended that the Bay Program (1) develop a comprehensive, coordinated implementation strategy and (2) better target limited resources to the most effective and realistic work plans. In response to our recommendation to develop a comprehensive and coordinated implementation strategy, the Bay Program has developed a strategic framework to unify existing planning documents and articulate how the partnership will pursue its goals. According to the Bay Program, this framework is intended to provide the partners with a common understanding of the partnership’s agenda of work, a single framework for all bay protection and restoration work, and, through the development of realistic annual targets, a uniform set of measures to evaluate the partners’ progress in improving the bay. 
However, while this framework provides broad strategies for meeting the Bay Program’s goals, it does not identify the activities that will be implemented to meet the goals, resources needed to implement the activities, or the partner(s) who will be responsible for funding and implementing the activities. Therefore, we continue to believe that additional work is needed before the strategy that the Bay Program has developed can be considered a comprehensive, coordinated implementation strategy that can move the restoration effort forward in a more strategic and well-coordinated manner. In response to our recommendation that the program target resources to the most cost-effective strategies, according to the Bay Program, in addition to the strategic framework described above, it has developed: annual targets that it believes are more realistic and likely to be achieved; an activity integration plan system to identify and catalogue partners’ current and planned implementation activities and corresponding resources; and program progress dashboards, which provide high-level summaries of key information, such as status of progress, summaries of actions and funding, and a brief summary of the challenges and actions needed to expedite progress. According to the Bay Program, it has also adopted an adaptive management process, which will allow it to modify the restoration strategy in response to testing, monitoring, and evaluating applied strategies and incorporating new knowledge, and thereby better inform partners’ actions, emphasis, and future priorities. Bay Program officials told us that these actions have started to have the intended effects of promoting enhanced coordination among the partners, encouraging partners to review and improve their progress in protecting and restoring the bay, increasing the transparency of the Bay Program’s operations, and improving the accountability of the Bay Program and its partners for meeting the bay health and restoration goals. 
We believe these actions are positive steps toward responding to our recommendation and improving the management and coordination of the Bay Program. In addition, the Bay Program partners have established a funding priority framework that lists priorities for agriculture, wastewater treatment, and land management activities. While these priorities can be used to help achieve some of the annual targets established by the program, other annual targets—such as those for underwater bay grasses and oysters—do not have priorities associated with them. We believe that a clear set of priorities linked to the annual targets can help the partners focus the limited resources available to those activities that provide the greatest benefit to the health of the bay. In closing, Madam Chairwoman, it is well recognized that restoring the Chesapeake Bay is a massive, difficult, and complex undertaking. Our October 2005 report documented how the success of the program had been undermined by the lack of (1) an integrated approach to measure overall progress; (2) independent and credible reporting mechanisms; and (3) coordinated implementation strategies. These deficiencies had resulted in a situation in which the Bay Program could not present a clear and accurate picture of what the restoration effort had achieved, could not effectively articulate what strategies would best further the broad restoration goals, and could not identify how to set priorities for using limited resources. Since our report was issued, the Bay Program, with encouragement from Congress, has taken our recommendations seriously and has taken steps to implement them. The Bay Program has made important progress, and we believe that these initial steps will enable better management of the restoration effort. However, additional actions are still needed to ensure that the restoration effort is moving forward in the most cost-effective manner. Madam Chairwoman, this concludes my prepared statement. 
I would be happy to respond to any questions that you or Members of the Subcommittee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Anu Mittal at (202) 512- 3841 or mittala@gao.gov. Other individuals making significant contributions to this testimony were Sherry McDonald, Assistant Director, and Barbara Patterson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Chesapeake Bay Program (Bay Program) was created in 1983 when Maryland, Pennsylvania, Virginia, the District of Columbia, the Chesapeake Bay Commission, and the Environmental Protection Agency (EPA) agreed to establish a partnership to restore the Bay. The partnership's most recent agreement, Chesapeake 2000, sets out five broad goals to guide the restoration effort through 2010. This testimony summarizes the findings of an October 2005 GAO report (GAO-06-96) on (1) the extent to which measures for assessing restoration progress had been established, (2) the extent to which program reports clearly and accurately described the bay's health, (3) how much funding was provided for the effort for fiscal years 1995 to 2004, and (4) how effectively the effort was being coordinated and managed. It also summarizes actions taken by the program in response to GAO's recommendations. GAO reviewed the program's 2008 report to Congress and discussed recent actions with program officials. In 2005, GAO found that the Bay Program had over 100 measures to assess progress toward meeting some restoration commitments and guide program management. However, the program had not developed an integrated approach that would translate these individual measures into an assessment of progress toward achieving the restoration goals outlined in Chesapeake 2000. For example, while the program had appropriate measures to track crab, oyster, and rockfish populations, it did not have an approach for integrating the results of these measures to assess progress toward its goal of protecting and restoring the bay's living resources. In response to GAO's recommendation, the Bay Program has integrated key measures into 3 indices of bay health and 5 indices of restoration progress. In 2005, the reports used by the Bay Program did not provide effective and credible information on the health status of the bay. 
Instead, these reports focused on individual trends for certain living resources and pollutants, and did not effectively communicate the overall health status of the bay. These reports were also not credible because actual monitoring data had been commingled with the results of program actions and a predictive model, and the latter two tended to downplay the deteriorated conditions of the bay. Moreover, the reports lacked independence, which led to rosier projections of the bay's health than may have been warranted. In response to GAO's recommendations, the Bay Program developed a new report format and has tried to enhance the independence of the reporting process. However, the new process does not adequately address GAO's concerns about independence. From fiscal years 1995 through 2004, the restoration effort received about $3.7 billion in direct funding from 11 key federal agencies; the states of Maryland, Pennsylvania, and Virginia; and the District of Columbia. These funds were used for activities that supported water quality protection and restoration, sound land use, vital habitat protection and restoration, living resources protection and restoration, and stewardship and community engagement. During this period, the restoration effort also received an additional $1.9 billion in funding from federal and state programs for activities that indirectly contribute to the restoration effort. In 2005, the Bay Program did not have a comprehensive, coordinated implementation strategy to help target limited resources to those activities that would best achieve the goals outlined in Chesapeake 2000. The program was focusing on 10 key commitments and had developed numerous planning documents, but some of these documents were inconsistent with each other or were perceived as unachievable by the partners. 
In response to GAO's recommendations, the Bay Program has taken several actions, such as developing a strategic framework to unify planning documents and identify how it will pursue its goals. While these actions are positive steps, additional actions are needed before the program has the comprehensive, coordinated implementation strategy recommended by GAO.
AFRICOM is DOD’s newest geographic combatant command and was designated fully operational in September 2008. The command’s area of responsibility comprises all the countries on the African continent except Egypt. According to AFRICOM officials, Camp Lemonnier, Djibouti, is the only DOD site in Africa with a long-term use agreement, but AFRICOM has established temporary facilities across the continent to house the personnel and equipment used for security assistance, training, and other operations in the region. AFRICOM is supported by its subordinate commands: U.S. Army Africa, U.S. Naval Forces Africa, U.S. Marine Corps Forces Africa, U.S. Air Force Africa, and Special Operations Command Africa. AFRICOM also has a subordinate joint force command, Combined Joint Task Force-Horn of Africa, which conducts operations in East Africa to enhance partner nation capacity, promote regional stability, dissuade conflict, and protect U.S. and coalition interests. U.S. Army Africa has a dedicated military contracting element, the 414th Contracting Support Brigade, which provides OCS planning assistance to U.S. Army Africa and synchronizes and executes contracting solutions across the continent. AFRICOM’s subordinate commands are located in Europe and do not have assigned forces, except for the joint task force, which is located at Camp Lemonnier in Djibouti. Since its establishment, AFRICOM has increased its footprint on the continent to support the command’s missions of building African partner defense capabilities, responding to crises, and deterring transnational threats in order to promote regional security. In tandem with AFRICOM’s growing footprint, there has been a continued reliance on contractors to provide, among other things, logistical, transportation, and intelligence support to the command’s missions. This reliance on contractors was highlighted in September 2014 when AFRICOM supported the U.S. 
Agency for International Development and Liberia under Operation United Assistance in responding to the largest Ebola epidemic in history. AFRICOM reported having approximately 5,600 personnel on the African continent in October 2014, of whom 569—approximately 10 percent—were U.S. citizen contractor personnel. Moreover, the dollar value associated with DOD contracts in AFRICOM’s area of responsibility more than tripled, from $147 million in fiscal year 2009 to $463 million in fiscal year 2014. While the dollar value associated with these contracts decreased in most regions from fiscal year 2013 to fiscal year 2014 (see figure 1), AFRICOM officials expect the need for contract support to increase as regional instability and responsibilities for protecting U.S. personnel and facilities continue. OCS is the process of planning for and obtaining supplies, services, and construction from commercial sources in support of joint operations. OCS encompasses three functions: contract support integration, contracting support, and contractor management. These functions and their associated tasks are described in figure 2. Determining OCS requirements, as well as conducting the initial planning for and coordination of these tasks, is primarily an operational, not a contracting, function. As a result, all of the directorates within a command have OCS roles or responsibilities, but the directorates may not fully understand how to properly define requirements for and manage OCS. In the 2014 update to Joint Publication 4-10, the Joint Staff introduced the concept of an OCS Integration Cell as a central coordination point that can, among other things, provide staff integration by assisting and advising the directorates on these issues. Figure 3, based on information from the Joint Staff, demonstrates the integrating role of an OCS Integration Cell. 
Personnel accountability is generally the process by which the geographic combatant command identifies, captures, and records information—usually through the use of a database—about the number and type of personnel, including contractor personnel, in its area of operations. DOD’s Synchronized Predeployment and Operational Tracker (SPOT) was developed to assist the combatant commander in ensuring that accountability, visibility, force protection, medical support, personnel recovery, and other related support can be accurately forecasted and provided for the contractor personnel in the command’s area of responsibility. SPOT contains personnel accountability data only for contractor personnel (U.S. citizen, third country, and local national), not military or civilian personnel. When specified by contract, vendors are responsible for inputting and maintaining their employees’ personnel data in SPOT throughout the employees’ periods of performance. SPOT and service-specific accountability systems typically feed information into DOD’s accountability system to produce joint personnel status reports. DOD contracting officers are required to check prospective contractors’ names against a list of prohibited, restricted, or otherwise ineligible entities in a U.S. government contracting database called the System for Award Management. The prohibited entities list in the System for Award Management contains sanctioned and other prohibited entities identified by the Department of the Treasury, Department of State, DOD, and other U.S. agencies. A foreign vendor vetting cell can also be established to vet selected local national and third country national companies to prevent DOD from awarding contracts to companies having ties to violent extremist organizations or other inappropriate entities. 
The vetting consists of researching the principal owners and connections of the company using information provided by owners, and then cross-checking this information against various intelligence sources. Subordinate and individual base commanders are responsible for the security of military facilities within the combatant commander’s area of responsibility. A contractor employee screening process mitigates the potential security risk that third country and local national contractor personnel may pose when they have access to DOD facilities. While there is no standard DOD methodology for contractor employee screenings overseas, a background investigation or similar process to confirm the trustworthiness of contractor personnel should be conducted to the greatest extent possible. AFRICOM established a formal OCS structure with dedicated personnel—including an OCS Integration Cell—at the headquarters level to serve as the central coordination point to integrate OCS in all directorates, and to manage, plan for, and assess OCS. However, only one of AFRICOM’s subordinate commands has an organizational structure in place with dedicated personnel to manage and plan for OCS. Furthermore, AFRICOM has developed a scorecard to assess OCS capabilities at the subordinate command level, but portions of each assessment of the six subordinate commands have been inaccurate because it is unclear how the performance measures in scorecards should be applied. AFRICOM headquarters’ establishment of a formal OCS organizational structure began in 2011 with three dedicated DOD civilian personnel to oversee OCS activities in Africa. Since that time, several notable OCS milestones have been achieved, such as the establishment of an OCS Integration Cell for the command’s headquarters in early 2014. According to Joint Publication 4-10, an OCS Integration Cell should be established as a permanent cell at the geographic combatant command level. 
The primary function of the cell is to conduct general OCS planning, provide advice, promote coordination, and maintain a common OCS operating picture. In 2015 AFRICOM headquarters expanded its OCS Integration Cell from three dedicated civilian personnel to five who work alongside two preexisting OCS planners from the Defense Logistics Agency’s Joint Contingency Acquisition Support Office, according to AFRICOM officials. AFRICOM directorates also participate in facilitating OCS. For example, AFRICOM’s personnel directorate is tasked with maintaining personnel accountability throughout the area of responsibility. AFRICOM also has boards and working groups dedicated to coordinating OCS activities, including a board that synchronizes and optimizes OCS during all phases of operation. Participants in these boards and working groups include members of the OCS Integration Cell, such as the Joint Contingency Acquisition Support Office planners assigned to the combatant command, as well as staff from the combatant command, service component commands, and other combat support agencies. AFRICOM’s OCS Integration Cell has also taken steps to manage OCS at its subordinate commands by conducting staff assistance visits to enhance OCS capabilities. Specifically, since March 2014, AFRICOM OCS Integration Cell officials have conducted staff assistance visits to all of AFRICOM’s service component commands as well as to Special Operations Command Africa and Combined Joint Task Force-Horn of Africa. The purpose of these visits is to assist and partner with subordinate commands to ensure that current AFRICOM processes, policies, tools, and procedures for OCS align with DOD policy and joint doctrine, and to enhance mission execution. Additionally, staff assistance visits help AFRICOM officials to gain a better understanding of OCS procedures and challenges at the subordinate commands. 
Common observations from these visits include a lack of standardized OCS organization at the subordinate command level, constrained subordinate command resources for managing OCS, and a lack of OCS training. AFRICOM officials stated that they intend to work with the subordinate commands to address these issues. However, these officials did not provide details or a plan for how they would work with the subordinate commands. AFRICOM’s OCS Integration Cell also plays a key role in planning for OCS by working with operational planners to incorporate OCS concepts into planning documents. For example, according to officials, the OCS Integration Cell assisted AFRICOM’s planning directorate officials in developing OCS annexes for several concept plans, as well as the theater and regional campaign plans. These OCS annexes generally contained key considerations discussed in Joint Staff and other DOD guidance related to OCS, such as force protection, host nation agreements, and contractor oversight. AFRICOM OCS Integration Cell officials also developed a template for subordinate commands to use as they begin to develop their own OCS planning documents, including OCS annexes to plans and orders when needed. The OCS planning template is meant to guide AFRICOM subordinate commands in OCS planning for operations, exercises, security cooperation activities, and other initiatives. It contains 20 steps, including identifying mission length and requirements, leveraging historical contracts, developing acquisition and cross-servicing agreements, researching vendors, and identifying other factors that may impact OCS. With the exception of U.S. Army Africa, AFRICOM’s subordinate commands, including service component and joint force commands, generally lack formal organizational structures with personnel dedicated to OCS. 
Joint Publication 4-10 notes that OCS Integration Cells can be established at the service component and subordinate joint force command levels. Specifically, Joint Publication 4-10 indicates that each service component command determines, based on specific operational requirements, whether it should establish an OCS Integration Cell. For details about the functions of OCS Integration Cells at the combatant command, component command, and joint force command levels, see figure 4. U.S. Army Africa has established an organizational structure with dedicated full-time OCS personnel employed within the logistics directorate. This OCS Cell works with planners to incorporate OCS concepts into planning documents. U.S. Army Africa’s OCS Cell has also issued OCS guidance, which outlines OCS roles and responsibilities, including those related to development of contract requirements; OCS programs and procedures; and intra-theater cooperation with other component commands. Additionally, U.S. Army Africa’s dedicated military contracting element—414th Contracting Support Brigade—provides OCS planning assistance to U.S. Army Africa. While U.S. Naval Forces Africa, U.S. Air Force Africa, and U.S. Marine Corps Forces Africa do not have formalized OCS organizational structures, all of the service component commands participate in AFRICOM’s OCS-related working groups. In addition, Marine Corps Forces Africa has designated one official in its logistics directorate to serve as the component command’s OCS chief, but according to Marine Corps officials, this official is also responsible for OCS policy and coordination in the European Command area of responsibility. U.S. Air Force Africa and U.S. Naval Forces Africa have not established OCS organizational structures or dedicated OCS personnel. 
While Joint Staff guidance does not require the establishment of an OCS Integration Cell or dedicated OCS personnel at the service component command level, the Joint Staff has emphasized the benefits of using an OCS Integration Cell to synchronize requirements and coordinate contracting actions. Moreover, Navy contracting officials stated that contract requirements are not always clear, and if they do not have the statements of work from the requiring activity at an early stage, they may not know what is needed before the forces arrive on the ground. As a result, operations might be delayed or might be carried out without sufficient equipment or support services. An OCS Integration Cell or similar structure with dedicated OCS personnel at the other service component commands could help identify contract support issues early in the requirements development process, thereby enabling the commands to avoid delays. For example, U.S. Army Africa’s dedicated OCS personnel and the 414th Contracting Support Brigade have corrected potential issues regarding poor coordination of contracts. U.S. Army Africa officials stated that an Army unit may fail to inform contracting officials about what equipment they are bringing along on deployment, or may disregard training on how to properly draft a performance work statement. The 414th Contracting Support Brigade will incorporate these missing details into the contracts based on previous experience with units requiring similar services or goods. U.S. Naval Forces Africa, U.S. Marine Corps Forces Africa, and U.S. Air Force Africa logistics officials stated that one of their challenges is the limited number of personnel who are available to be dedicated to OCS. In our February 2015 report on high-risk issues, we reported that DOD continues to face capacity shortfalls for addressing OCS issues. Funding and staffing constraints may prevent the establishment of an OCS Integration Cell at the service component command level. 
Joint Publication 4-10 recognizes that the establishment of an OCS Integration Cell or similar structure at the service component command level will vary based on specific operational requirements. Joint guidance also indicates that there is no set structure or size for an OCS Integration Cell at any command level; size and configuration are mission-dependent, and a cell could be as small as two individuals or could be significantly larger, depending on the operation. Air Force Africa and Navy Africa officials also cited a lack of service guidance as an impediment to establishing an OCS organizational structure. An OCS Integration Cell was established at AFRICOM’s subordinate joint force command supporting Operation United Assistance in 2014, but not at the Combined Joint Task Force-Horn of Africa at Camp Lemonnier, Djibouti, or Special Operations Command Africa. Joint Publication 4-10 indicates that an OCS Integration Cell is normally established at the subordinate joint force command level, where it leads general OCS planning, provides advice, promotes coordination of OCS matters, and maintains an OCS common operating picture, among other tasks. As of January 2015, according to officials, Combined Joint Task Force-Horn of Africa had two officials in the contingency contracting office and one logistics official who received OCS training. Officials further stated that Combined Joint Task Force-Horn of Africa plans to dissolve its contingency contracting office entirely by the end of fiscal year 2016, as it transitions to a more enduring presence at Camp Lemonnier, Djibouti. DOD officials stated that there are plans to create two dedicated OCS positions within the Combined Joint Task Force-Horn of Africa’s logistics directorate. As dedicated personnel, these officials could carry out some OCS coordinating tasks and could frequently communicate with logistics, operations, and planning officials about OCS issues. 
However, an OCS Integration Cell would provide a permanent centralized coordination point within the joint environment at Camp Lemonnier. Furthermore, this type of coordination will be increasingly important in the future, as DOD develops a more enduring joint presence at Camp Lemonnier. At the subordinate joint force command level, joint guidance provides that an OCS Integration Cell should normally exist, depending on mission conditions, but the guidance does not specify those conditions. As a result, there is no clear guidance directing Combined Joint Task Force-Horn of Africa or Special Operations Command Africa to establish an OCS Integration Cell. An OCS Integration Cell or similar structure would be particularly useful in a joint environment due to the number of military service contracting elements operating at one location with potentially limited resources. According to Combined Joint Task Force-Horn of Africa contracting officials, applying OCS concepts such as establishing an OCS Integration Cell at Camp Lemonnier would be helpful to avoid duplication of work and increased costs. For example, there are multiple car rental contracts for different service components located at Camp Lemonnier, Djibouti. Combined Joint Task Force-Horn of Africa contracting officials stated that if they had all contracted with one vendor they would have gotten a better price. As another example, officials at Camp Lemonnier stated that some military service tenants have arrived without previously informing the Navy as the Lead Service for Contracting at Camp Lemonnier about the number of contractor personnel accompanying them. Without prior coordination, Navy officials do not know how much housing, food, and fuel these tenants will require at the base. Special Operations Command Africa officials stated that the contract requirements development process can be haphazard, and that logistical planners are often brought into the process later than necessary. 
An OCS organizational structure or dedicated OCS personnel could ensure that OCS requirements are considered and clarified well before operations have begun. In May 2013 AFRICOM developed and began utilizing OCS Readiness Scorecards to systematically assess its subordinate commands’ progress in managing OCS. The scorecards document how well OCS is being incorporated into the subordinate commands’ planning documents, organizational structures such as boards and working groups, training exercises, and accountability processes. To that end, the scorecards comprise 32 OCS tasks divided among four broad categories: (1) compliance (planning, administration, and training); (2) boards and working groups; (3) exercises; and (4) common operating picture/SPOT. The command’s OCS Integration Cell rated each subordinate command as being either compliant, partially compliant, or non-compliant for each component task, based on OCS Integration Cell officials’ determination about the subordinate command’s progress toward completing that task. For example, according to U.S. Army Africa’s May 2014 OCS readiness scorecard, personnel had received training for contracting officer’s representatives prior to their deployment, so it was rated as compliant for this task. According to AFRICOM officials, the scorecards provide an overall picture of subordinate commands’ OCS capabilities, promote collaboration on OCS issues, and identify timely OCS training requirements and activities. However, we found that AFRICOM’s assessments do not fully reflect the extent to which subordinate commands have completed or not completed OCS tasks. Specifically, in three of AFRICOM’s scorecard assessments it listed the subordinate command as complying with a task, when the subordinate command had not accomplished that task. For example, one component task is to incorporate OCS-specific tasks into exercises. AFRICOM assessed Air Force Africa as compliant on this task. 
However, Air Force Africa officials stated that they did not participate in any exercises during the assessment time period, so they did not incorporate OCS tasks into exercises. In another example, Special Operations Command Africa officials stated that they had processes for awarding contracts, but they lacked standard processes for incorporating OCS into planning and requirements determinations. However, AFRICOM listed Special Operations Command Africa as possessing standardized processes for support units and activities because the scorecard does not specify that these processes should specifically involve the incorporation of OCS into planning and requirements development. Furthermore, AFRICOM applied different standards of compliance to different subordinate commands. For example, one component task focuses on contractor accountability through the use of SPOT, a contractor personnel accountability database. AFRICOM assessed Marine Forces Africa as partially compliant with its contractor accountability guidance, and all other components as fully compliant. However, AFRICOM officials stated that no subordinate command had processes in place to ensure that contractor personnel were being included in SPOT at the time of the assessment, and that AFRICOM contractor personnel accountability guidance was unclear about when the commands were required to account for contractor personnel, and what types of contractor personnel to include. While AFRICOM did not penalize the subordinate commands for non-compliance when the guidance they needed to comply with was unclear, the scorecards did not clarify how to assess subordinate commands under these conditions. As a result, the assessment officials inconsistently applied the assessment standards. According to GAO’s performance measure evaluation guidance, performance measures should be objective and performance data should be accurate and consistent. 
According to AFRICOM’s OCS Instruction, AFRICOM logistics officials are tasked with conducting periodic reviews or inspections of AFRICOM staff and component commands to ensure compliance with required OCS tasks. In addition, Standards for Internal Control in the Federal Government state that control activities need to be established to monitor performance measures and indicators, and that these activities should be effective and efficient in accomplishing objectives. Moreover, according to GAO best practices in performance measurement, agency officials need to ensure that performance data are complete, accurate, and consistent enough to document performance and support decision-making at various organizational levels. Further, to the greatest extent possible, performance goals and measures should not allow subjective considerations or judgments to dominate the measurement. Because its assessment standards are unclear, AFRICOM’s assessments do not accurately reflect subordinate command OCS capabilities. In explaining the errors contained within the scorecards, AFRICOM officials who conducted the scorecard evaluations stated that they gave the subordinate commands the benefit of the doubt for certain tasks, particularly those for which guidance was unclear. AFRICOM officials further stated that the scorecard process is maturing, that there are several component tasks in the scorecards that they realize need improvement or clarification, and that the future Theater Campaign Plan will provide clearer explanations of the component tasks in terms of compliance and OCS planning. However, AFRICOM officials stated that they are in the process of updating the Theater Campaign Plan and did not give a timeframe for completion. Without clearly defined assessment standards for the scorecard, AFRICOM cannot accurately assess the OCS actions taken by subordinate commands. 
The lack of accurate assessments threatens the integrity of those measures and undermines their value in promoting progress in OCS management. Furthermore, AFRICOM officials stated that they plan to incorporate OCS scorecards into the Defense Readiness Reporting System, the system used to gauge readiness across DOD. Without improvements, either full or partial incorporation of the scorecard assessments into DOD’s readiness reporting system could compromise the integrity of that system by capturing an inaccurate picture of OCS readiness. Contractors in Africa provide a variety of services in support of U.S. military operations, such as transportation, construction, and food services, and officials expect the number of contractor personnel on the continent to increase. However, AFRICOM does not have a complete picture of the number of contractor personnel supporting its operations in the region. AFRICOM uses two primary sources—daily personnel status reports and SPOT, a DOD contractor personnel accountability database—to collect contractor personnel accountability information, but neither source provides comprehensive accountability or visibility of DOD contractor personnel on the continent because the total number of local national contractor personnel are not being included in either, and because the numbers of U.S. citizen and third country national contractor personnel vary between the two. According to SPOT data, which account for some but not all contractor personnel, the number of contractor personnel in Africa is more than 1,130—a nine-fold increase since April 2011. DOD contractor personnel perform a variety of functions and services to support mission needs, including transportation, linguistics, engineering, construction, cleaning, and food services. For example, in November 2014 AFRICOM reported approximately 300 U.S. contractor personnel included in SPOT as supporting Operation United Assistance in Senegal and Liberia. 
The contractor services primarily consisted of construction and engineering of training and medical units, and basic life support for troops, such as food and janitorial services. AFRICOM also reported approximately 350 U.S. contractor personnel at Camp Lemonnier, Djibouti, in October 2014 through its daily personnel status reports. These contractor personnel performed a number of functions, including transportation, engineering, and construction. AFRICOM officials expect the number of contractor personnel on the continent to continue to increase. Plans for expansion at Camp Lemonnier in 2015 include numerous projects to increase capacity, such as expansion of emergency troop housing, taxiway extension, living and working quarters, aircraft loading areas, and upgrades to existing power plant capabilities. Although AFRICOM officials stated that they expect the number of contractors at Camp Lemonnier to decrease over time as construction is completed, they also stated that they expect the total number of contractor personnel on the African continent to continue to increase and shift from location to location as missions evolve, particularly with regard to special operations. For example, officials from Special Operations Command Africa stated that they are experiencing a significant increase in contracting actions for Combined Joint Task Force-Horn of Africa and that plans are in place to add 10 contingency contracting officers to Special Operations Command Africa for fiscal year 2016. Officials stated that it is extremely important to integrate OCS into planning for the base’s expansion. Due to the austere environment, officials stated that they intend to rely on contracted solutions because they are more cost-effective than using military personnel. 
AFRICOM uses two sources to capture accountability information on contractor personnel supporting DOD operations—daily personnel status reports from the services, and SPOT. According to AFRICOM guidance, military service components, joint task forces, sub-unified commands, and forward operating sites in the AFRICOM area of responsibility are responsible for accounting for all military, DOD civilian, and DOD contractor personnel assigned, attached to, or under their operational control, and they must submit a daily personnel status report to AFRICOM personnel officials via email. This information is also to be entered into military service personnel accountability systems. DOD’s personnel accountability system—Joint Personnel Accountability Reconciliation and Reporting (JPARR)—receives information from various sources, including service-specific accountability databases and SPOT, and generates Joint Personnel Status Reports. AFRICOM personnel officials stated that they have not yet implemented JPARR and are instead using daily personnel status reports from the services via email and SPOT to manually develop Joint Personnel Status Reports and account for contractor personnel. For a depiction of both AFRICOM’s daily personnel status report and SPOT accountability processes, see figure 5. AFRICOM’s daily personnel status reports generally account for U.S. citizen contractor personnel, but they account for third country and local national contractor personnel inconsistently. AFRICOM officials generally rely on daily personnel status reports from service components via email to provide visibility on the number of personnel on the continent, including the number of contractor personnel. These reports generally account for U.S. contractor personnel, but they do not consistently include local or third country national contractor personnel, or certain U.S. citizen contractor personnel not directly assigned to a U.S. installation. 
For example, the 414th Contracting Support Brigade in Uganda accounts for local and third country national contractor personnel in its daily personnel status report, but daily personnel status reports from the Air Force in Niger do not. In addition, AFRICOM’s daily personnel status reports do not include some U.S. contractor personnel who are not directly working on U.S. installations in a given country. For example, personnel recovery contractor personnel in Niger are not being accounted for in daily personnel status reports because, according to officials, they do not live on the installation. DOD guidance indicates that combatant commands are to establish policies and procedures to account for contractor personnel. Joint Publication 4-10 indicates that a key to success in contractor management is for geographic combatant commands and subordinate joint force commands to establish clear, enforceable, and well-understood contractor personnel accountability policies and procedures early in the planning stages. The guidance notes that the supported commands must work closely with service components to ensure that proper contract and contractor management oversight is in place. The guidance further states that contractor personnel visibility and accountability are essential to determine and provide the needed resources for government support requirements such as life support, force protection, and personnel recovery in uncertain or austere operational environments. Moreover, Chairman of the Joint Chiefs of Staff policy and guidance on personnel accountability states that the joint personnel status reports developed by combatant commanders are meant to satisfy the commander’s information needs and to authenticate the total number of personnel, including military, DOD civilian, and DOD contractors, who are physically present in a geographic combatant commander’s area of responsibility. 
Under the policy and guidance, the geographic combatant commander is responsible for reporting the total number of personnel, including DOD contractors, via joint personnel status report to the Joint Staff personnel directorate daily via secure website. However, this guidance is unclear as to what types of contractors should be accounted for in the joint personnel status report. The Chairman of the Joint Chiefs of Staff guidance identifies SPOT as the designated web-based contractor database and lists the types of contractor personnel included in SPOT—including U.S., local national, host nation, and third country national—but it does not clearly specify the types of contractor personnel to report in the daily personnel status report. AFRICOM officials stated that they interpreted this guidance as directing them to account only for U.S. citizen contractor personnel in the daily personnel status reports. Conversely, service component officials at some forward operating sites included both third country and local national, as well as U.S. citizen, contractor personnel in their personnel status reports sent to AFRICOM. AFRICOM guidance indicates that personnel status reports should include all military, civilian, and contractor personnel assigned, attached, or under the operational control of the service component, sub-unified command, joint task force, or forward operating site commander. However, AFRICOM guidance does not specify whether these contractor personnel include third country or local national contractor personnel such as linguists, or U.S. citizen contractor personnel who are under the operational control of the local commander but do not live on the installation. In addition, AFRICOM officials stated that while they receive daily personnel status reports from all of the services, they do not have complete information on how each of the services is collecting this information. 
Without consistently recording the number of contractor personnel in personnel status reports, AFRICOM cannot comprehensively account for contractor personnel on the continent. Thus, revising the Chairman of the Joint Chiefs of Staff and AFRICOM policy and guidance on personnel accountability to clearly specify the types of personnel to be accounted for in the joint personnel status report would provide AFRICOM with better assurance that its staff consistently provides it with comprehensive information. We found that not all contractors consistently accounted for their employees in SPOT. Specifically, contractors at two of the three sites we visited are not reporting third country national and local national employees in SPOT. For example, the contractor that manages the primary logistics contract in Uganda reported U.S. citizen, third country, and local national contractor personnel in SPOT in January 2015, but another contractor that manages the primary base services contract at Camp Lemonnier, Djibouti, accounted for U.S. citizen, third country, and local national contractor personnel only in its company systems and not in SPOT. DOD contracting officials subsequently modified the primary base services contract for Camp Lemonnier to include a SPOT accountability requirement specific to Djibouti. Officials from the primary base services company at Camp Lemonnier stated that they have plans to report U.S. citizen and third country national employees in SPOT, but they did not provide a timeframe for doing so. In addition, according to Camp Lemonnier officials, construction contractors building facilities at the site do not account for their U.S. citizen, third country, and local national contractor personnel in SPOT and do not have plans for doing so. Various provisions in guidance cover the use of SPOT to account for contractor personnel. 
In December 2014 AFRICOM issued an interim policy for the use of SPOT to account for and maintain visibility of contractor personnel in the AFRICOM area of responsibility. The guidance states that AFRICOM requires the use of SPOT, or its successor, to account for contractor personnel in all phases of operations. It states that SPOT is to be used for all U.S. citizens and third country nationals deploying to the area of responsibility, regardless of contract dollar amount. SPOT is also to be used to report an aggregate count of all local nationals on a monthly basis for contracts employing these personnel with periods of performance longer than 30 days. AFRICOM’s OCS annex to the theater campaign plan indicates that SPOT should be utilized to account for contractor personnel deploying to the area of responsibility and that contracting agencies will direct contractors to input required information, although it does not specify what type of contractor personnel should be registered in SPOT in these provisions. In addition, guidance issued by DOD directs the use of clauses that would require contractors to use SPOT to account for certain contractor personnel supporting DOD operations in Djibouti and in Operation United Assistance. However, we found that many of the contracts identified by DOD as involving performance in Djibouti or in support of Operation United Assistance did not contain the clauses. Although DOD guidance directs the inclusion of the clauses for new contracts providing supplies or services in Djibouti and for Operation United Assistance, and modification of existing contracts to the extent feasible, in some cases there may be a good reason for omitting the clause. For example, if the contract is for hotel lodging, it may not make sense to have hotel employees registered in SPOT. 
In other contracts where the personnel will be directly supporting operations at DOD facilities, such as those for construction or base support, requiring the contractor to register its personnel in SPOT would help AFRICOM to identify who is supporting its operations and for whom it may be responsible in the event of an emergency. AFRICOM personnel accountability guidance is unclear. As stated earlier, Chairman of the Joint Chiefs of Staff guidance does not specify what types of contractor personnel, such as U.S. citizen, third country national, or local national, should be included in joint personnel status reports. In addition, there are multiple provisions in guidance issued by different sources, requiring the registration of different types of employees in SPOT in various circumstances. Furthermore, AFRICOM’s interim guidance generally requiring all contractor personnel in its area of responsibility to be accounted for in SPOT expires in December 2015. AFRICOM officials stated that they intend to request that this interim guidance be included in an updated version of the Defense Federal Acquisition Regulation Supplement. According to DOD officials, as of October 2015, AFRICOM was in the final stages of drafting proposed language to this effect. However, as of November 2015, there have been no changes. In March 2014, the AFRICOM commander stated that the command is responsible for helping to protect U.S. personnel in Africa. Without clear accountability guidance, AFRICOM cannot consistently or comprehensively determine how many contractor personnel support DOD operations in the region. Without clarification and deconfliction of guidance, AFRICOM and its subordinate commands will not know what types of contractor personnel to include in personnel status reports and in SPOT, and they could continue to account for these personnel inconsistently. 
As a result, commanders are at risk of not having comprehensive visibility over who is supporting DOD operations in the area of responsibility. In addition, the AFRICOM area of responsibility lies in an increasingly high threat environment. The Department of State has identified 15 high threat posts on the continent. Without comprehensive and consistent contractor personnel accountability, commanders may be unaware of whom they are responsible for in the event of an emergency.

AFRICOM conducts some limited vetting of potential contractors, also referred to as vendors, but it has not established a foreign vendor vetting process or cell that would preemptively identify vendors who support criminal, terrorist, or other sanctioned organizations. Additionally, in efforts to conduct individual contractor employee screening, AFRICOM sites we visited used different types of background investigations to determine the trustworthiness of contractor employees with access to DOD facilities. However, these AFRICOM forward operating sites were not incorporating additional screening measures, such as biometric screening or counterintelligence interviews, according to the specific risks at each site. As a result, AFRICOM is at risk of not exercising the appropriate level of vendor vetting or contractor employee screening on the African continent.

AFRICOM conducts some limited vendor vetting, but it has not established a foreign vendor vetting process. The Federal Acquisition Regulation and DOD guidance require contracting officers to ensure that they are not contracting with any prohibited entities by vetting potential vendors’ names against a list of prohibited, restricted, or otherwise ineligible persons or entities in the System for Award Management. The System for Award Management is a database used during the procurement process that, among other things, provides information on parties that are prohibited or restricted from receiving federal contracts. 
In addition to information added by DOD, the database includes entities identified by the Department of the Treasury’s Office of Foreign Assets Control, Department of State, and other U.S. agencies. All of the AFRICOM service components with whom we conducted interviews stated that their contracting officials do check the prohibited entities list in the System for Award Management. However, as noted by Joint Publication 4-10 in a related discussion, checking certain lists alone may be insufficient. Specifically with respect to two lists maintained by the Department of the Treasury, Joint Publication 4-10 indicates that the department has traditionally designated only umbrella organizations. In addition, officials from all of AFRICOM’s service components stated that in their contracting process they use a list of vendors vetted by the local U.S. embassy. However, U.S. Embassy and Department of State headquarters officials stated that vendors on these lists are not vetted for security risks or to determine whether they are connected to or supporting any prohibited organizations; rather, they are vetted only for their ability to provide the required services. As a result, AFRICOM’s current vetting process is limited in its ability to ensure that DOD is not funding prohibited organizations in high risk areas. Current DOD guidance is not clear on what vendor vetting steps or process should be established at each combatant command to mitigate the risk of contracting with terrorist or other prohibited organizations. Joint Publication 4-10 discusses the benefit of establishing a cell, when circumstances warrant, to vet foreign vendors for possible security concerns and avoid awarding contracts to companies that have ties to insurgents. However, the guidance does not require the establishment of a vendor vetting cell or specify under what conditions it would be appropriate. Two combatant commands—U.S. 
Central Command—have established foreign vendor vetting cells for this purpose. Although AFRICOM does not have its own foreign vendor vetting cell, during AFRICOM’s most recent contingency operation—Operation United Assistance—AFRICOM requested assistance from U.S. Transportation Command’s foreign vendor vetting cell to provide threat assessments on all of the vendors that had been awarded AFRICOM contracts. Although U.S. Transportation Command was able to take on this additional workload, AFRICOM does not have a written agreement to use that command’s vetting capability, and it may not be able to leverage U.S. Transportation Command’s capabilities for future contingencies. In addition, AFRICOM may not be fully prepared to avoid contracting with vendors who may also be supporting the enemy should it become actively engaged in hostilities. In response to recent statutory provisions regarding contracting with the enemy, DOD has issued guidance for geographic combatant commands (with the exception of U.S. Northern Command) to identify persons or entities who directly or indirectly provide funds, including goods and services, received under covered contracts to persons or entities that are actively opposing U.S. or Coalition forces in a contingency operation in which the armed forces are actively engaged in hostilities. According to DOD officials, the list of identified persons and entities is integrated into the System for Award Management. As noted above, the guidance also requires contracting officers to check this database to ensure that contracts are not awarded to prohibited or restricted persons or entities. According to officials, U.S. Central Command is the only combatant command that is currently required to proactively identify these vendors, organizations, and people and add them to the prohibited entities list because it is the only combatant command currently engaged in hostilities in a contingency environment. 
Although AFRICOM does not currently have any declared contingency operations involving active hostilities, it operates in a high threat environment in which hostilities could quickly arise. Without having a foreign vendor vetting process in place, it will be difficult for AFRICOM to recognize and thereby avoid instances of contracting with the enemy. AFRICOM officials agree that a foreign vendor vetting process would reduce the risk that they would contract with prohibited entities, and they have drafted guidance on establishing a foreign vendor vetting cell. However, AFRICOM officials stated that they would need specific guidance from the Office of the Secretary of Defense or Joint Chiefs of Staff specifying the conditions under which combatant commands should establish a vendor vetting cell to effectively implement this process. For a description of AFRICOM’s current vendor vetting process and how it would be supplemented by the establishment of the foreign vendor vetting cell described in its draft guidance, see figure 6. AFRICOM officials also cited resource limitations, specifically a shortfall in intelligence analyst positions required for the proposed foreign vendor vetting cell, as one of the challenges AFRICOM faces in establishing a foreign vendor vetting process. We have previously concluded that a risk-based approach can help agencies strategically allocate limited resources to achieve desired outcomes. In 2011 we recommended that U.S. Central Command consider a risk-based approach in identifying and vetting the highest-risk foreign vendors. In response to our recommendation, U.S. Central Command established a formalized risk-based vetting program. Its vetting process is designed to ensure that DOD funds are not used for illicit purposes and to provide a risk management tool to identify foreign vendors that have ties to the insurgency or that are involved in nefarious activities. 
While the establishment of a foreign vendor vetting cell may not be appropriate for all operations, published DOD guidance specifying under what circumstances and how a vetting cell should be established would better position AFRICOM and other commands to avoid contracting with the enemy in high threat areas or in the event that they become actively engaged in hostilities. AFRICOM generally operates in a high threat area of responsibility, and the screening of non-U.S. personnel entering AFRICOM facilities protects DOD personnel, equipment, and installations from acts of espionage, sabotage, or other intelligence activities. According to agency officials and our observation of contractor personnel screening at the sites we visited, all of the sites conduct some type of background investigation to screen non-U.S. contractors, but Camp Lemonnier, Djibouti—the largest and most enduring site in the AFRICOM area of operations—has the most comprehensive screening process (see table 1 below). The other sites that we visited have incorporated additional screening measures to varying degrees. AFRICOM officials stated that all DOD sites in the AFRICOM area of responsibility conduct background investigations for non-U.S. contractors who require base access. All three sites that we visited use background investigations to determine the trustworthiness of third country and local national contractor employees who have access to DOD facilities. Background investigations at the sites we visited generally consist of gathering the contractor employee’s biographical information and crosschecking it with U.S. government intelligence sources and against local criminal records. However, the utility of these types of background investigations is limited by the quality of (1) the biographic information provided by the contractors, and (2) local government records, upon which the investigations are based. 
For example, according to base access officials, Djiboutian local national contractors are often able to provide the year, but not the specific day and month, in which they were born. In addition, at the sites we visited, DOD and Department of State officials stated that local government criminal records in Africa may not be easily searchable or well-maintained. As a result, additional contractor employee screening measures may be warranted. In light of the limitations of background investigations, additional contractor employee screening measures, such as counterintelligence interviews, biometric screening, and document and media exploitation, are incorporated to varying degrees at the three sites we visited (see table 1). For example, counterintelligence interviews with potential contractor personnel can be used to confirm the trustworthiness of the contractor employee and the information provided for the background investigation. While all of the sites we visited conduct counterintelligence interviews to varying degrees, according to DOD officials, none of them conducts interviews with all of its non-U.S. contractor personnel. Biometric screening is another measure that can be incorporated into the contractor employee screening process. The collection of biometric information, such as fingerprints or iris scans, links an individual’s identity to measurable physical characteristics. Officials can then screen this biometric information against a biometrically enabled watch list or other intelligence database. In August 2009 AFRICOM issued a Biometrics Concept of Operations stating that, to the maximum extent allowable under policy and law, persons requiring access to DOD installations in the AFRICOM area of responsibility will be enrolled into a biometrics database. However, Camp Lemonnier, Djibouti, is the only site we visited that has fully incorporated biometric screening into its contractor employee screening process. 
From April to mid-December 2014 Camp Lemonnier, Djibouti, denied access to six contractor personnel after finding derogatory information on them through its biometric screening process. Security officials at Camp Lemonnier, Djibouti, also use biometric and document and media exploitation systems to upload contractor employees’ biometrics and personal documentation to the larger DOD intelligence enterprise (see figure 7). Uploading of personal documentation into U.S. intelligence databases refers to document and media exploitation, which consists of the collection, analysis, and exploitation of equipment, personal documents, and media to generate actionable intelligence. This information can then be accessed by intelligence officials if the contractor employee attempts to access another DOD site. For example, from April to mid-December 2014 security officials at Camp Lemonnier, Djibouti, placed 30 contractor employees on a watch list based on their negative activity. Moreover, with the exception of Camp Lemonnier, Djibouti, contractor employee screening processes are not well-documented in local base security guidance. Joint Publication 4-10 states that commanders must ensure that local screening and badging policies and procedures are in place for all contractor personnel requiring access to U.S. facilities. We found that Camp Lemonnier, Djibouti, was the only site of the three we visited that had clearly outlined contractor employee screening processes in local base security guidance. The Air Force’s site in Niamey, Niger, has base security guidance, but it does not describe its contractor employee screening process. One reason cited by officials for not developing local contractor employee screening procedures and policies was that AFRICOM has not yet specified what measures should be incorporated into the sites’ contractor employee screening processes. 
DOD Instruction 2000.12, which provides guidance for the DOD Antiterrorism Program, states that combatant commands shall establish antiterrorism policies and programs for the protection of all DOD elements and personnel in their areas of responsibility. One of the minimum elements of an antiterrorism program is risk management. Antiterrorism risk management includes the determination of how best to employ given resources and force protection measures to deter, mitigate, or prepare for a terrorist incident. Those measures could include contractor employee screening measures. AFRICOM drafted contractor employee screening guidance in November 2014, which it subsequently updated. As of June 2015, AFRICOM officials stated that the guidance was under review, but they did not provide a timeframe for its issuance. This draft guidance would provide that screening measures, including biometric screening, should be in place when non-U.S. personnel have access to AFRICOM-controlled facilities. The draft guidance contains a detailed appendix that describes under what conditions biometric screening should be conducted, and it discusses document and media exploitation. However, it lacks additional information regarding when other screening measures should be implemented. Officials at the sites we visited stated that limited access to biometric collection equipment and counterintelligence agents posed a resource challenge. As a result, although AFRICOM’s draft guidance would direct sites to perform additional screening measures whenever a non-U.S. contractor employee requires access to them, the sites may not have the resources to implement the guidance, when issued. Furthermore, risk varies from site to site, depending on a number of factors, such as the location’s threat profile, operations, and numbers and types of personnel. 
Risk-based guidance that indicates the specific measures to be incorporated into contractor employee screening processes could better position AFRICOM components to conduct the appropriate level of screening and effectively allocate screening resources to protect DOD personnel and facilities from insider threats.

DOD has spent billions of dollars on contract support since 2002, and as its footprint in Africa increases it is more frequently relying on contractor support for a range of operations to provide logistical, transportation, and intelligence support to AFRICOM’s missions. The enhanced capabilities offered by OCS can be a significant force multiplier in every phase of joint and coalition operations in Africa. Conversely, the inability to effectively manage and plan for OCS could yield unintended consequences, such as higher costs and inadvertent contracting with vendors that have ties to violent extremist organizations, that could complicate or even undermine operational objectives. While AFRICOM has taken steps to manage and plan for OCS, challenges remain in areas such as development of OCS structures, assessments of subordinate command capabilities, accounting of the total number of contractor personnel, and contractor vetting. Several of AFRICOM’s subordinate commands—service component and joint force commands—lack organizational structures, such as OCS Integration Cells, with dedicated personnel to manage and plan for OCS. A structure such as an OCS Integration Cell would be particularly useful in a joint environment, such as Combined Joint Task Force-Horn of Africa, due to the number of military service contracting elements operating at the same location, with potentially limited resources. Additionally, clearly defined assessment standards could help AFRICOM to more accurately assess the OCS actions taken by subordinate commands. 
Furthermore, AFRICOM cannot comprehensively account for DOD contractors on the continent because AFRICOM and Joint Staff guidance is unclear regarding the types of contractors who should be accounted for and by which personnel accountability process. Clear guidance could help AFRICOM to determine how many contractors support DOD operations in the region, providing commanders with greater visibility over who is supporting DOD operations in the area of responsibility. Also, AFRICOM has not established a foreign vendor vetting process to preemptively identify vendors that support criminal, terrorist, or other sanctioned organizations, because there is no DOD guidance specifying conditions under which combatant commands should have a vendor vetting process or cell in place to determine whether potential vendors actively support any terrorist, criminal, or other sanctioned organizations. Moreover, there is no guidance clarifying when combatant commands should develop procedures for transmitting names of vendors identified through such a vendor vetting process for inclusion in prohibited entities lists. Finally, not all of the AFRICOM forward operating sites we visited are incorporating additional screening measures according to the specific risks at each site. The development of a foreign vendor vetting process and risk-based employee screening measures could help AFRICOM to determine appropriate levels for vendor vetting and contractor employee screening on the African continent. We recommend the following seven actions. 
To enable AFRICOM’s component commands to better plan, advise, and coordinate for OCS, we recommend that the AFRICOM Commander, as part of AFRICOM’s ongoing efforts to update related guidance and emphasize the importance of OCS integration at the subordinate command level, take the following actions:

Direct the service components to designate elements within their respective staffs to be responsible for coordinating OCS, and consider the establishment of an OCS Integration Cell or similar structure with these dedicated OCS personnel, as needed.

Clarify under what conditions a subordinate joint force command, such as Combined Joint Task Force-Horn of Africa, should establish an OCS Integration Cell.

To enable AFRICOM to better identify, address, and mitigate OCS readiness gaps at its component commands before inaccurate information is incorporated into formal defense readiness reporting systems, we recommend that the AFRICOM Commander take the following action:

Clarify the scorecard process, including assessment standards, for OCS Readiness Scorecards to ensure that evaluators can accurately assess subordinate commands’ OCS capabilities.

To enable AFRICOM to comprehensively and consistently account for contractor personnel in Africa, we recommend that:

The Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct Joint Staff to clarify what types of contractor personnel should be accounted for in its guidance on personnel status reports.

The AFRICOM Commander develop area of responsibility-wide contractor personnel accountability guidance on or before December 2015, when the current guidance expires, that clarifies which types of contractor personnel should be accounted for using SPOT, and when SPOT accountability requirements should be incorporated into contracts. 
To ensure that combatant commands are not contracting with entities that may be connected to or supporting prohibited organizations, we recommend that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, take the following action:

Develop guidance that clarifies the conditions under which combatant commands should have a foreign vendor vetting process or cell in place to determine whether potential vendors actively support any terrorist, criminal, or other sanctioned organizations, including clarifying when combatant commands should develop procedures for transmitting the names of any vendors identified through this process for inclusion in prohibited entities lists in the appropriate federal contracting databases, such as the System for Award Management.

To ensure that AFRICOM applies a risk-based approach to contractor employee screening in Africa, we recommend that the Secretary of Defense take the following action:

Direct AFRICOM to complete and issue guidance that specifies the standard of contractor employee screening for forward operating sites, based on factors such as the number of DOD personnel on base, type of operations, and local security threat.

In written comments on a draft of this report, DOD concurred with four of our recommendations, partially concurred with two, and did not concur with our recommendation related to AFRICOM contractor personnel accountability guidance. DOD’s comments are summarized below and reprinted in appendix II. DOD also provided technical comments, which we incorporated where appropriate.

DOD partially concurred with our first recommendation, that AFRICOM direct its service components to designate elements within their respective staffs to be responsible for coordinating OCS, and consider the establishment of an OCS Integration Cell or similar structure with dedicated OCS personnel. 
DOD stated that AFRICOM is assessing its subordinate commands’ OCS structures to determine the best way forward, and acknowledged that there are clear advantages and benefits to establishing an OCS Integration Cell at the service-component level even though there is no doctrinal requirement to do so. As we discuss in the report, while Joint Staff guidance does not require the establishment of an OCS Integration Cell or dedicated OCS personnel at the service component command level, the Joint Staff has emphasized the benefits of using an OCS Integration Cell to synchronize requirements and coordinate contracting actions. This recommendation was intended to provide service component commands with the flexibility to develop OCS Integration Cells or similar structures with dedicated OCS personnel who can address these concerns. Moreover, an OCS Integration Cell or similar structure could help identify contract support issues early in the requirements development process, thereby enabling the service component commands to avoid delays in receiving needed equipment or services. Further, this designation would better ensure that AFRICOM’s component commands effectively plan, advise, and coordinate OCS activities and meet the intent of the recommendation.

DOD concurred with our second recommendation, that AFRICOM clarify under what conditions a subordinate joint force command, such as Combined Joint Task Force-Horn of Africa, should establish an OCS Integration Cell. In its comments, AFRICOM stated that all subordinate joint force commands should establish an OCS Integration Cell. AFRICOM further stated that it had scheduled a staff assistance visit to Combined Joint Task Force-Horn of Africa to assess and develop a plan for establishing an OCS Integration Cell. 
We acknowledge that AFRICOM’s statement in its comments clarifies that an OCS Integration Cell should be established at a subordinate joint force command, and believe that the scheduling of a staff assistance visit to Combined Joint Task Force-Horn of Africa is a positive first step toward establishing a cell there. However, we also believe that providing that clarification in AFRICOM guidance would better ensure the establishment of such a cell at various types of subordinate joint force commands. As we noted in the report, an OCS Integration Cell was established at AFRICOM’s subordinate joint force command supporting Operation United Assistance in 2014, but not at the Combined Joint Task Force-Horn of Africa in Djibouti, or Special Operations Command Africa. Furthermore, AFRICOM officials stated that they conducted a staff assistance visit to Combined Joint Task Force-Horn of Africa in August 2015 to assess and develop a plan to establish an appropriate OCS structure there. Guidance clarifying under what conditions a subordinate joint force command should establish an OCS Integration Cell, if fully implemented, would meet the intent of our recommendation.

DOD partially concurred with the third recommendation, that AFRICOM clarify the scorecard process, including assessment standards, for OCS Readiness Scorecards to ensure that evaluators can accurately assess subordinate commands’ OCS capabilities. DOD stated that the scorecard is a command initiative designed to drive discussion about OCS issues with subordinate commands, but is not a replacement for the Defense Readiness Reporting System to report OCS. We agree that the scorecard is not such a replacement. However, as noted in the report, AFRICOM officials stated that they plan to incorporate OCS scorecards into the Defense Readiness Reporting System, the system used to gauge readiness across DOD. 
This recommendation was intended to ensure that OCS scorecard assessment standards are clearly defined so that AFRICOM can accurately assess OCS actions taken by subordinate commands. Without improvements, either full or partial incorporation of the scorecard assessments into DOD’s readiness reporting system could compromise the integrity of that system by capturing an inaccurate picture of OCS readiness. DOD concurred with our fourth recommendation, that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct Joint Staff to clarify what types of contractor personnel should be accounted for in its guidance on personnel status reports. In its comments, DOD stated that Chairman of the Joint Chiefs of Staff Manual 3150.13C, Joint Personnel Reporting Structure-Personnel Manual, provides policy and guidance on what types of contractor personnel to account for in personnel status reports, but that additional training and amplifying local procedures issued by AFRICOM’s personnel directorate may be needed to fully implement its provisions and ensure consistent interpretation of the guidance. As we noted in the report, the Chairman of the Joint Chiefs of Staff guidance identifies SPOT as the designated web-based contractor database and lists the types of contractor personnel included in SPOT—including U.S., local national, host nation, and third country national—but it does not clearly specify the types of contractor personnel to report in the daily personnel status report, and as a result, service component officials at some forward operating sites inconsistently included various types of contractors in personnel status reports. 
While we believe that additional guidance provided by AFRICOM’s personnel directorate would provide further clarification on this issue, we continue to believe that clarifying Joint Staff guidance to clearly specify the types of contractor personnel to be accounted for in joint personnel status reports would provide AFRICOM better assurance that its staff consistently provides it with comprehensive information. DOD did not concur with the fifth recommendation, that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct AFRICOM to develop area of responsibility-wide contractor personnel accountability guidance on or before December 2015, when the current guidance expires, that clarifies which types of contractor personnel should be accounted for using SPOT and when SPOT accountability requirements should be incorporated into contracts. DOD stated that there is not a requirement for the Secretary of Defense to direct AFRICOM to develop this guidance. We agree that it may not be necessary for the Secretary of Defense to direct this action and accordingly adjusted the language of the recommendation to direct it to AFRICOM. Additionally, DOD stated it is coordinating a draft class deviation to the Defense Federal Acquisition Regulation Supplement on accountability requirements for contractor personnel performing in the AFRICOM area of responsibility, which it expects to be approved and published by December 2015. We acknowledge that this class deviation, when completed, may provide further clarity on the types of contractor personnel to include in SPOT. However, we continue to believe that AFRICOM could benefit from developing and issuing contractor personnel accountability guidance for its area of responsibility. 
Specifically, as we noted in the report, there have been no changes to the Defense Federal Acquisition Regulation Supplement to date on this topic, DOD has not issued a class deviation, and AFRICOM’s existing interim guidance expires in December 2015. Thus, we continue to believe that clear accountability guidance would enable AFRICOM to more consistently and comprehensively determine how many contractor personnel support DOD operations in the region. DOD concurred with our sixth recommendation, that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, develop guidance that clarifies the conditions under which combatant commands should have a foreign vendor vetting process or cell in place to determine whether potential vendors actively support any terrorist, criminal, or other sanctioned organizations, including clarifying when combatant commands should develop procedures for transmitting the names of any vendors identified through this process for inclusion in prohibited entities lists in the appropriate federal contracting databases, such as the System for Award Management. In its comments, DOD stated that OSD has established a joint working group to identify key stakeholders and develop DOD policy requiring combatant commands to develop foreign vendor vetting processes. When fully implemented, we believe these actions would meet the intent of our recommendation. DOD concurred with our seventh recommendation, that AFRICOM continue to complete and issue guidance that specifies the standard of contractor employee screening for forward operating sites, based on factors such as the number of DOD personnel on base, type of operations, and local security threat. 
In its comments, DOD stated that AFRICOM has completed the staffing of a draft instruction that will provide guidance and specify the standards for contractor employee screening for forward operating sites, including a risk-based approach to contractor employee screening based on applicable mitigating factors that include the type of contractor, operations and local security threat. While we have not seen an updated version of this draft guidance that includes a risk-based approach to screening, if it is fully implemented as described above, it would meet the intent of our recommendation. We are providing copies of this report to the appropriate congressional committees, the Secretary of Defense, the Chairman of the Joint Chiefs of Staff and the AFRICOM Commander. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of our review were to determine the extent to which AFRICOM: (1) has an organizational structure in place to manage, plan for, and assess operational contract support; (2) accounts for contractor personnel in its area of responsibility; and (3) vets non-U.S. contractors and contractor employees. The focus of this review was on AFRICOM and its subordinate commands, including the service component commands, as well as Special Operations Command Africa in Europe, and the Combined Joint Task Force-Horn of Africa at Camp Lemonnier in Djibouti. Further, in terms of our review of AFRICOM contractor personnel accountability and contractor vetting, our scope included only contract support on the African continent, excluding contract support located at AFRICOM headquarters. 
To determine the extent to which AFRICOM has an organizational structure in place to manage, plan for, and assess operational contract support, we gathered information from and conducted interviews with officials at AFRICOM headquarters, as well as the AFRICOM service component command headquarters in Europe. To determine the extent to which AFRICOM assesses OCS capabilities, we evaluated the OCS readiness scorecards created by AFRICOM J4 (Logistics Directorate) to measure its service components’ OCS capabilities in 2014. In its assessment of the component commands, AFRICOM J4 rated each service component as compliant, partially compliant, or non-compliant with OCS tasks for four performance measure categories: planning, policy, operations/exercises/security cooperation, and administration. We evaluated the ratings that J4 applied to the service components in these categories to determine whether they were accurate. Specifically, to evaluate the accuracy of the scorecard ratings, we validated the ratings using evidence gathered from the service components and compared ratings of the service components with similar conditions against one another to determine whether they received the same ratings. To evaluate the extent to which AFRICOM plans for OCS, we reviewed OCS annexes—Annex W’s—to AFRICOM’s theater campaign plan and its regional campaign plans to determine the extent to which they included key OCS concepts outlined in DOD and joint staff guidance. To determine the extent to which AFRICOM accounts for contractor personnel, we reviewed DOD, AFRICOM, and Joint Staff personnel accountability guidance and interviewed AFRICOM J1 officials, as well as military service officials with personnel accountability responsibilities at the AFRICOM forward operating sites we visited. 
In addition, we reviewed contracts with place of performance in Djibouti from March 2013 to February 2014, as well as Operation United Assistance (OUA) contracts from December 2014 to February 2015 listed in the All Government Contract Spend data, to determine whether they included clauses to register employees supporting the contracts in the Synchronized Predeployment and Operational Tracker (SPOT). The different timeframes used for the selection of those contracts were based, in part, on the timing of various DOD requirements to use clauses that included SPOT-related provisions. Finally, we reviewed monthly SPOT data from 2011 to 2015 provided by AFRICOM personnel officials to determine whether the numbers of contractor personnel had increased or decreased within the past 5 years. We conducted a data reliability analysis of this information, including corroborating the data with agency officials to ensure that it was accurate, up-to-date, and reasonable. We determined the data to be sufficiently reliable for the purposes of this report. To determine the extent to which AFRICOM vets contractors and contractor employees, we reviewed DOD and military department contracting, anti-terrorism, and physical security guidance and compared it to information gathered from AFRICOM J2X (counterintelligence) and three forward operating sites concerning their vendor vetting and contractor employee screening processes. In addition, we observed contractor screening and base access procedures at the forward operating sites in Djibouti, Niger, and Uganda. We selected these locations based on variations in the military services represented, the types of contractors (U.S., third country, and local national), and the types of contract services provided. 
The information gathered from these three sites, while not generalizable to all AFRICOM sites, provides valuable insights about personnel accountability, contractor vetting, and contractor employee screening processes in the AFRICOM area of responsibility.
Office of Acquisitions Management, Washington, D.C.
Office of Logistics Management, Washington, D.C.
Bureau of African Affairs, Washington, D.C.
Bureau of Diplomatic Security, Washington, D.C.
U.S. Embassy, Djibouti, Djibouti
U.S. Embassy, Niamey, Niger
U.S. Embassy, Kampala, Uganda
We conducted this performance audit from June 2014 to December 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, Carole Coffey and James A. Reynolds, Assistant Directors; Gregory Hellman; Courtney Reid; Michael Shaughnessy; Michael Silver; Amie Steele; Cheryl Weissman; and Amanda Weldon made key contributions to this report.
Operational Contract Support: Actions Needed to Enhance the Collection, Integration, and Sharing of Lessons Learned. GAO-15-243. Washington, D.C.: March 16, 2015.
Contingency Contracting: Contractor Personnel Tracking System Needs Better Plans and Guidance. GAO-15-250. Washington, D.C.: February 18, 2015.
High Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.
Defense Acquisitions: Update on DOD’s Efforts to Implement a Common Contractor Manpower Data System. GAO-14-491R. Washington, D.C.: May 19, 2014.
Warfighter Support: DOD Needs Additional Steps to Fully Integrate Operational Contract Support into Contingency Planning. GAO-13-212. Washington, D.C.: February 8, 2013. 
Operational Contract Support: Management and Oversight Improvements Needed in Afghanistan. GAO-12-290. Washington, D.C.: March 29, 2012.
Afghanistan: U.S. Efforts to Vet Non-U.S. Vendors Need Improvement. GAO-11-355. Washington, D.C.: June 8, 2011.
Warfighter Support: Cultural Change Needed to Improve How DOD Plans for and Manages Operational Contract Support. GAO-10-829T. Washington, D.C.: June 29, 2010.
Warfighter Support: DOD Needs to Improve Its Planning for Using Contractors to Support Future Military Operations. GAO-10-472. Washington, D.C.: March 30, 2010.
Contingency Contract Management: DOD Needs to Develop and Finalize Background Screening and Other Standards for Private Security Contractors. GAO-09-351. Washington, D.C.: July 31, 2009.
Military Operations: DOD Needs to Address Contract Oversight and Quality Assurance Issues for Contracts Used to Support Contingency Operations. GAO-08-1087. Washington, D.C.: September 26, 2008.
Military Operations: Background Screenings of Contractor Employees Supporting Deployed Forces May Lack Critical Information, but U.S. Forces Take Steps to Mitigate the Risk Contractors May Pose. GAO-06-999R. Washington, D.C.: September 22, 2006.
Since its establishment in 2008, AFRICOM has increased its footprint in Africa to support its mission—building African partner capabilities and deterring threats to regional security. In tandem with AFRICOM's growing footprint will be a continued reliance on contractors to support the command's operations. Accordingly, AFRICOM and subordinate commands must be able to plan for and integrate OCS during operations. House Report 113-446 included a provision for GAO to review OCS in Africa. This report examines the extent to which AFRICOM (1) has an organizational structure in place to manage, plan for, and assess OCS; (2) accounts for contractor personnel; and (3) vets non-U.S. contractors and contractor employees. To conduct this work, GAO evaluated AFRICOM's OCS organizational structures and conducted site visits to Djibouti, Niger, and Uganda to collect information on accountability and vetting processes. GAO selected these locations based on the types of contractor employees represented, among other factors. U.S. Africa Command (AFRICOM) has established a formal operational contract support (OCS) organizational structure at its headquarters to serve as the central coordination point to manage, plan for, and assess OCS. However, except for U.S. Army Africa, AFRICOM's subordinate commands do not have OCS organizational structures with dedicated personnel to manage OCS. Officials from AFRICOM's subordinate commands stated that applying OCS concepts would be helpful to avoid duplication of work and increased costs. 
One structure that the Department of Defense (DOD) has introduced is an OCS Integration Cell, an entity with dedicated personnel to provide staff integration and promote coordination on OCS issues; such a cell could help identify and address gaps at AFRICOM's commands, especially in a joint environment like Combined Joint Task Force-Horn of Africa, where contracting officials stated that some military tenants have arrived without informing responsible officials about the number of contractors accompanying them. AFRICOM has also developed a scorecard to assess OCS management capabilities at the subordinate commands against certain standards, but these assessments have not always been accurate because the standards have not been clearly defined or consistently applied. Without clearly defined assessment standards, AFRICOM cannot accurately assess the OCS actions taken by subordinate commands. AFRICOM does not have a complete picture of the number of contractor personnel supporting its operations in the region. AFRICOM uses two primary sources—daily personnel status reports and the Synchronized Predeployment and Operational Tracker, a DOD contractor personnel accountability database—to collect contractor personnel accountability information, but neither source provides comprehensive accountability or visibility of DOD contractor personnel on the continent because the total number of local national contractor personnel are not being included in either, and the numbers of U.S. citizen and third country national contractor personnel vary between the two. Without clear guidance on how to comprehensively account for contractor personnel, it will be difficult for AFRICOM to ensure that it has full visibility over who is supporting its operations. AFRICOM conducts some limited vetting of potential non-U.S. 
contractors, also referred to as vendors, but it has not established a foreign vendor vetting process or cell that would preemptively identify vendors who support terrorist or other prohibited organizations. AFRICOM has not yet established a foreign vendor vetting cell because while DOD guidance discusses the benefit of a cell, it does not require it or specify under what conditions it would be appropriate. Additionally, DOD sites in Africa use background investigations to determine the trustworthiness of contractor employees with access to DOD facilities. However, not all AFRICOM sites are incorporating additional screening measures, such as biometric screening or counterintelligence interviews, based on the specific risks at each site. As a result, AFRICOM is at risk of not exercising the appropriate level of vendor vetting or contractor employee screening on the African continent to protect DOD personnel from insider threats. GAO made recommendations to DOD regarding types of contractor personnel to account for, foreign vendor vetting process development, and guidance for contractor accountability and employee screening, among others. DOD generally concurred, but did not concur that AFRICOM should develop contractor accountability guidance because it was in the process of doing so. GAO continues to believe the recommendations are valid, as discussed in this report.
To a large degree, spectrum management policies flow from the technical characteristics of the radio spectrum. Although the radio spectrum spans the range from 3 kilohertz to 300 gigahertz, 90 percent of its use is concentrated in the 1 percent below 3.1 gigahertz. The crowding in this region has occurred because these frequencies have properties that are well suited for many important wireless technologies, such as mobile phones, radio and television broadcasting, numerous satellite communication systems, radars, and aeronautical telemetry systems. The process known as spectrum allocation has been adopted, both domestically and internationally, as a means of apportioning frequencies among various types of wireless services and users to prevent radio interference. Interference occurs when two or more radio signals interact in a manner that disrupts the transmission and reception of messages. Spectrum allocation involves segmenting the radio spectrum into bands of frequencies that are designated for use by particular types of radio services or classes of users, such as broadcast television and satellites. Over the years, the United States has designated hundreds of frequency bands for numerous types of wireless services. Within these bands, government, commercial, scientific, and amateur users receive specific frequency assignments or licenses for their wireless operations. The equipment they use is designed to operate on these frequencies. Appendix I provides an overview of how the major frequency ranges of the spectrum are currently used. During the last 50 years, developments in wireless technology have increased the number of usable frequencies, reduced the potential for interference, and improved the efficiency of transmission through various techniques, such as reducing the amount of spectrum needed to send information. 
In June 2002, for example, FCC initiated a proceeding to promote the commercial development of several undeveloped bands in the upper region of the spectrum where new uses for these bands are becoming practical due to technological developments. Nevertheless, the demand for frequencies by both government and the private sector remains high as new technologies that use spectrum are developed and used. An example of this is the unexpectedly rapid growth of mobile phone use in the United States. Subscribers of mobile phone service jumped from 16 million in 1994 to an estimated 137 million in 2002, greatly exceeding even the wireless industry’s original projections. Our nation’s approach to spectrum management evolved in response to technical developments, legislation, court decisions, and policy initiatives. The legal and regulatory framework in place today for allocating radio spectrum among federal and nonfederal users emerged from a compromise over two fundamental policy questions: (1) whether spectrum decisions should be made by a single government official or shared among several decision makers; and (2) whether all nonfederal government users should operate radio services without qualification, or if a standard should be used to license these operators. The resulting structure—dividing spectrum management between the President and an independent regulatory body— reflects both the President’s responsibility for national defense and the fulfillment of federal agencies’ missions, and the U.S. government’s longstanding encouragement and recognition of private sector investment in developing and deploying commercial radio and other communications services. The need for government regulation of the radiofrequency spectrum became apparent at the beginning of the twentieth century with the application of wireless communications to maritime safety. 
In 1904, President Theodore Roosevelt adopted a recommendation of an interagency board and placed all government coastal radio facilities under the U.S. Navy’s control. The first federal statute to establish a structure for spectrum management was the Radio Act of 1912, which was enacted in part to rationalize the burgeoning use of the radio spectrum. The Act required users of the radio spectrum to obtain a license, and it consolidated licensing authority with the Secretary of Commerce. Commerce’s practice was to grant licenses for particular frequencies on a first-come, first-served basis. This approach proved to be deficient, however, when the rapid growth of radio communications in the late 1910s and 1920s led to radiofrequency interference problems. The courts determined that the Secretary of Commerce lacked the authority under the 1912 Act to alleviate these problems by using licensing as a means of controlling radio station operations or by designating frequencies for uses or issuing licenses of limited duration. In recognition of such limitations, deliberations began in the 1920s to devise a new framework for radio spectrum management. Although there was general agreement that licensing should entail more than a registration process, there was debate about designation of the licensing authority and the standard that should govern the issuance of licenses. This debate went on over several years as the Department of Commerce convened four radio conferences (1922–25) attended by manufacturers, broadcasters, civilian and military government users, and other stakeholders to make recommendations addressing overcrowding of the airwaves. For example, at the first national radio conference in 1922, a bill was drafted, and subsequently introduced in the House of Representatives, that would have placed the issuance of licenses under the absolute discretion of the Secretary of Commerce. 
Subsequent bills introduced in the House and Senate in 1925 took differing approaches to licensing authority. The House bill would have vested licensing authority with the Secretary of Commerce with licensing appeals going to a commission, while the Senate bill would have placed all licensing functions in an independent commission from the start. The Radio Act of 1927 reflected a compromise on a spectrum management framework. In order to allay concerns about vesting all licensing authority in the hands of one person (specifically, the Secretary of Commerce), the new Act reserved the authority to assign frequencies for all federal government radio operators to the President and created the Federal Radio Commission (FRC) to license nonfederal government operators. Under the Act, the FRC was granted licensing authority for one year to resolve interference problems, after which it was to become an appellate body to address disputes with the Secretary of Commerce, who was to assume licensing duties. Composed of five members from five different regions of the country, FRC was empowered to assign frequencies, establish coverage areas, and establish the power and location of transmitters under its licensing authority. Further, the Act delineated that a radio operation proposed by a nonfederal license applicant must meet a standard of “the public interest, convenience, and necessity,” and that a license conveyed no ownership in radio channels nor created any right beyond the terms of the license. The FRC’s one-year authority over licensing was extended several times by the Congress because the commission needed more time to deal with interference problems. As these problems persisted, the FRC’s authority was extended for an indefinite term pending new legislation. By 1930, it was becoming evident that the licensing task was too complex to be conferred permanently on the Department of Commerce, which was perceived as being already overburdened with other issues. 
New legislation was enacted in the form of the landmark Communications Act of 1934. Under this Act, the FRC was abolished and its authorities transferred to the new Federal Communications Commission (FCC), which brought together the regulation of telephone, telegraph, and radio services under one independent regulatory agency. The 1934 Act, however, also retained the authority of the President to assign spectrum to and manage federal government radio operations. For over 75 years, this division in responsibilities has remained the essential feature of U.S. spectrum management, unlike many other countries that chose to concentrate spectrum management within one government entity. The President’s authority for managing federal spectrum has been lodged in various parts of the government since the 1934 Act. However, a source of advice and support on federal government spectrum use during these changes has been IRAC, composed of representatives from federal agencies that use the most spectrum. IRAC was formed in 1922 when Secretary of Commerce Herbert Hoover drew attention to the need for cooperative action in solving problems arising from the federal government’s interest in radio use. He invited interested government departments to designate representatives for a special government radio committee. The committee recommended the establishment of a permanent interdepartmental committee. As a result, the Interdepartment Advisory Committee on Governmental Radio Broadcasting (later renamed IRAC) was formed. Over the ensuing decades, IRAC, whose existence and actions were affirmed by the President in 1927, has continued to advise whoever has been responsible for exercising the authority of the President to assign frequencies to the federal government. Currently, IRAC assists NTIA in assigning frequencies to federal agencies and developing policies, programs, procedures, and technical criteria for the allocation, management, and use of the spectrum. 
Figure 1 shows IRAC’s present membership, which includes FCC in a nonvoting liaison capacity. Over the past 75 years, since the 1927 Act formed our divided structure of spectrum management, there has been historical evidence of cooperation and coordination in managing federal and nonfederal spectrum to promote its effective use. For example, FCC and IRAC agreed in 1940 to give each other notice of proposed actions that might cause interference or other problems for their respective constituencies. Further, FCC has participated in IRAC meetings, and NTIA frequently provides comments in FCC proceedings that affect federal radio operations. As will be discussed later, FCC and NTIA also work together with the Department of State to formulate a unified U.S. position on issues at international meetings that coordinate spectrum use regionally and globally. However, as demand for this limited resource increases, particularly with the continuing emergence of new commercial wireless technologies, NTIA and FCC face serious challenges in trying to meet the growth in the needs of their respective incumbent users, while accommodating the needs of new users. As FCC has noted, the basic problem is that demand for spectrum is outstripping the supply. Since nearly all of the usable radio spectrum has been allocated already, accommodating more services and users often involves redefining spectrum allocations. The current divided U.S. spectrum management structure has methods for allocating spectrum for new uses and users of wireless services, but these methods have occasionally resulted in lengthy negotiations between FCC and NTIA. Several parties, including the Congress, have suggested that coordinated planning could help identify and resolve some allocation difficulties. 
FCC and NTIA have not yet implemented long-standing congressional directives to conduct joint, national spectrum planning, although they have conducted independent planning efforts and have recently taken steps to improve coordination. One method to accommodate more services and users is spectrum “sharing,” which enables more than one user to transmit radio signals on the same frequency band. In a shared allocation, a distinction is made as to which user has “primary” or priority use of a frequency and which user has “secondary” status, meaning that it must defer to the primary user. Users may also be designated as “co-primary,” in which the first operator to obtain authority to use the spectrum has priority to use the frequency over another primary operator. As shown in figure 2, more than half of the spectrum from 9 kHz to 3.1 GHz is shared between federal and nonfederal users. NTIA must ensure that the status assigned to users in shared spectrum (primary/secondary or co-primary) meets users’ needs, and that users abide by rules applicable to their designated status. Another method to accommodate new users and technologies is “band-clearing,” or reclassifying a band of spectrum from one set of radio services and users to another, which requires moving previously authorized users to a different band. Band-clearing decisions affecting only nonfederal or only federal users are managed within FCC or NTIA, respectively, albeit sometimes with difficulty. However, band-clearing decisions that involve radio services of both types of users pose a greater challenge. Specifically, they require coordination between FCC and NTIA to ensure that moving existing users to a new frequency band is technically feasible and meets their radio operation needs. In addition, such moves often involve costs to the existing user of the band, who may need to modify or replace existing equipment to operate on new frequencies. 
The need for spectrum reallocation can originate from many different sources, including the results of international decisions on spectrum use or requests from industry or federal users. Also, the Congress has in the past mandated the reallocation of spectrum from federal to nonfederal use. Once any needed research has been conducted and both FCC and NTIA agree on the proposed reallocation, FCC issues a “Notice of Proposed Rulemaking” to obtain public comments on the proposed allocation change. After the comment period, FCC publishes a Report and Order that directs any changes that will be made to the frequency allocation table. Spectrum users who disagree with the Report and Order may petition FCC for a change that could result in an amended decision. Figure 3 depicts the primary steps in the process by which the reallocation of a frequency band from a federal to nonfederal government designation would occur if no court challenges arise. While many such band-clearing decisions have been made throughout radio history, these negotiations can be protracted and contentious. A hotly debated issue today is how to accommodate “third-generation” wireless services, which enable handheld communication devices to provide both voice and high-speed data. In October 2000, President Clinton directed that a plan be developed to select spectrum for third-generation services, but this attempt was unsuccessful. A new task force was established. In July 2002, the Department of Commerce, in conjunction with FCC, the Department of Defense (DOD), and other federal agencies, released its study that concluded that 90 MHz of spectrum could be allocated for third-generation services without disrupting communication services critical to national security. This 90 MHz of spectrum could be available for third-generation services no later than December 2008 and would come from both federal and nonfederal bands. 
FCC told us that the relationship between FCC and NTIA on spectrum management has become more structured since the Congress became active in the 1990s in directing the reallocation of spectrum from federal to nonfederal government use. For example, the Omnibus Budget Reconciliation Act of 1993 (P.L. 103-66, Aug. 10, 1993) directed the reallocation of not less than 200 MHz of spectrum from federal to private sector use. NTIA was directed to identify frequency bands that could be reallocated; use specific criteria in making recommendations for their reallocation; issue a preliminary report upon which public comment on proposed reallocations would be solicited; obtain analyses and comment from FCC; and transfer frequency bands within specified time frames. The Act also required FCC to gradually allocate and assign these frequencies over the course of 10 years. The Balanced Budget Act of 1997 (P.L. 105-33, Aug. 5, 1997) imposed a stricter deadline for NTIA to identify frequency bands for reallocation and required FCC to reallocate, auction, and assign licenses by September 2002 for an additional 20 MHz of spectrum. To deal with the protracted nature of some spectrum reallocation decisions, some officials we interviewed have suggested establishing a third party—such as an outside panel or commission, an office within the White House, or an interagency group—to arbitrate or mediate differences between FCC and NTIA. For example, the United Kingdom has a formal standing committee, co-chaired by officials from the Radiocommunications Agency and the Ministry of Defense, that has authority to resolve contentious spectrum issues. FCC officials noted, however, that an arbitration function would go to the core of the responsibilities currently entrusted to FCC and NTIA in making allocation decisions. 
Moreover, it is not clear how such a function would be set up or the extent to which the President, who retains spectrum management authority for government users and national defense, would allow this authority to be placed in the hands of an arbitrator. FCC officials maintain that the handful of issues involving inherently difficult reallocation choices attracts attention and leads to what, in their view, is a mistaken assumption that the current reallocation process is broken. They noted that FCC and NTIA have coordinated successfully on over 50 spectrum-related rulemakings in the past year alone. Mechanisms for ensuring that incumbent users receive comparable spectrum and are reimbursed for the cost of relocating are also being developed or proposed. The National Defense Authorization Act for Fiscal Year 2000 specified a number of conditions that have to be met if spectrum in which DOD is the primary user is surrendered. The Act requires NTIA, in consultation with FCC, to identify and make available to DOD for its primary use, if necessary, an alternate band(s) of frequency as replacement(s) for the band(s) surrendered. Further, if such band(s) of frequency are to be surrendered, the Secretaries of Defense and Commerce, and the Chairman of the Joint Chiefs of Staff must jointly certify to relevant congressional committees that such alternative band(s) provide comparable technical characteristics to restore essential military capability. Under the Strom Thurmond National Defense Authorization Act for Fiscal Year 1999, federal agencies are authorized to accept compensation payments when they relocate or modify their frequency use to accommodate nonfederal users of spectrum. The Act directs NTIA and FCC to develop procedures for the implementation of the relocation provisions. NTIA issued a Notice of Proposed Rulemaking regarding these provisions in January 2001 and a final rule in June 2002. 
Under this rule, federal agencies would prepare an estimate of their relocation costs. This figure would be provided to potential bidders at future auctions. FCC has stated that the Commission will adopt any additional rules or procedures necessary to supplement NTIA’s reimbursement procedures. Under current law, however, federal agencies would be unable to expend these payments without additional congressional action. In July 2002, the Department of Commerce sent to the congressional leadership a draft bill to amend the Communications Act of 1934 to create a Spectrum Relocation Fund to revise the procedures under which federal entities are paid for relocating from spectrum frequencies reallocated for auction to commercial entities. According to NTIA, this fund would benefit both the agencies, by providing greater certainty in recovering their relocation costs, and the private sector, by providing greater certainty on the ultimate price of the licenses they obtain at auction. However, it would be important for the Congress to establish up front what controls it wants to maintain over such a fund. For example, would the Office of Management and Budget control when and how much an agency received in reimbursement or would the Congress maintain control by requiring an agency to obtain an appropriation? Several U.S. spectrum experts said that one means of improving the spectrum allocation process is to develop coordinated, national spectrum planning and policies that better anticipate future needs and put more predictability into spectrum decision-making. The Congress called for coordinated spectrum planning in the Omnibus Budget Reconciliation Act of 1993, which required NTIA and FCC to conduct joint spectrum planning sessions. Subsequently, the National Defense Authorization Act of 2000 included a requirement for FCC and NTIA to review and assess the progress toward implementing a national spectrum plan. 
Even before these congressional directives, NTIA itself, in a 1991 report, recommended that NTIA and FCC seek to institute a coordinated, strategic, long-range planning process. The output of this process would be a formal joint FCC/NTIA plan that would be periodically updated, with goals, policies, and specific actions to provide for future spectrum requirements and improved spectrum management. The Defense Science Board similarly concluded in November 2000 that the United States lacks a mechanism to formulate a national spectrum policy that balances traditional national security uses of the spectrum with new commercial uses of the spectrum. According to NTIA, the United States Table of Frequency Allocations, which documents the spectrum allocations for over 40 radio services, along with existing spectrum management processes, constitutes a basic U.S. strategic spectrum plan, which covers all cases of spectrum use. However, as we pointed out in an earlier report, the national allocation table reflects only the current landscape of spectrum use and does not provide a framework to guide spectrum decisions for the future. FCC and NTIA have each undertaken planning efforts, but they are focused largely on issues involving their separate constituencies and, as such, do not fulfill the requirements of the congressional directives. For example, FCC conducts spectrum planning for nonfederal government use through two staff committees and uses public forums, en banc hearings, advisory committees, and other methods to gather and provide information for its spectrum planning. NTIA’s spectrum planning has resulted in several spectrum planning documents over the last 20 years, including the September 2000 Federal Long-Range Spectrum Plan that identified current and future federal spectrum uses, along with any unsupported spectrum requirements. 
In addition, NTIA established the Strategic Spectrum Planning program in 1992, through which it produced several additional reports on spectrum planning, dealing with land mobile spectrum planning options, radio astronomy spectrum planning options, and federal radar spectrum requirements. Interaction between the two agencies also takes place on an ongoing basis. For example, FCC has liaison status on IRAC and its subcommittees, which provides it with an avenue for commenting on federal government issues. NTIA, for its part, provides comments on FCC proceedings on issues that could affect federal users. In addition, both agencies (along with industry) are involved in preparing the United States’ unified position for World Radiocommunication Conferences (WRCs). One FCC official described the consensus-building involved in this preparatory process as the closest thing the United States has to a national spectrum strategy. However, FCC and NTIA officials acknowledged that these interactions have not fulfilled the congressional mandate for coordinated national spectrum planning. FCC and NTIA officials stated that a key problem in developing a strategy for national spectrum planning is the inherent difficulty of trying to predict future trends in the fast-developing area of wireless services. For example, FCC officials noted that both FCC and wireless industry forecasts greatly underestimated the huge growth of mobile phone service during the 1990s. On the other hand, emerging wireless technologies that appear promising may not develop as planned, resulting in underutilization of spectrum that has been set aside for them. The Chairman of FCC and the Administrator of NTIA recently commented on the need for coordinated planning, and the agencies are currently engaged in efforts that could provide a basis for improved planning. 
For example, in early 2002, FCC announced the creation of a Spectrum Policy Task Force to explore how spectrum can be put to the highest and best use in a timely manner. In July 2002, FCC received comments in response to a public notice issued for the Task Force on several spectrum management and use issues including market-oriented allocation and assignment policies, interference protection, spectral efficiency, public safety communications, and international coordination. In August 2002, the Spectrum Policy Task Force held four public workshops addressing spectrum policy issues. Participants included representatives from academia, industry, and government. The Task Force intends to report to the Commission by October 2002. For its part, NTIA hosted a spectrum summit in early April 2002 that included participants from FCC, NTIA, and federal agency and industry representatives. The summit included several sessions to explore ways to improve the management of the spectrum through planning and technological innovations. In addition, NTIA’s 2003 budget request includes over $1 million in funding to develop a plan to review and improve its overall performance of spectrum management duties. In June 2002, NTIA officials stated that FCC and NTIA had recently adopted a “One Spectrum Team” approach to improve interagency communication and take a more proactive approach to spectrum management. It remains to be seen whether a well-coordinated and clearly defined national spectrum strategy emerges from these efforts. As noted earlier, the management of our domestic spectrum has been tied to international agreements on spectrum use at regional and global levels. Domestic spectrum allocations are generally consistent with international allocations negotiated and agreed to by members of the International Telecommunication Union (ITU) at WRCs. 
Decisions reached at these conferences can have far-reaching implications for the direction and growth of the multibillion dollar wireless communications industry in this country and abroad. Key officials raised questions about the adequacy of the current U.S. preparatory process, in particular the use of separate processes by FCC and NTIA to develop U.S. positions, and the short tenure of the head of the U.S. delegation to the conferences. The emergence of new radio applications with international ramifications, such as broadcasting, radio navigation, and satellite-based services, has increased the need for international agreements to prevent cross-border signal interference and maximize the benefits of spectrum in meeting global needs, such as air traffic control. At the same time, the number of participating nations in international radio conferences has risen dramatically—from 9 nations in the first conference held in 1903, to 65 nations in 1932, to 148 nations in 2000—along with the frequency of conferences (now held every 2 to 3 years), and the number of agenda items negotiated at each conference (e.g., 11 in 1979; 34 in 2000). There has also been a movement toward regional alignment at WRCs. Because decisions on agenda items are made by vote of the participating countries—with one vote per country—uniform or block voting by groups of nations has emerged, as areas such as the European Union seek to advance regional positions. The Department of State coordinates and mediates the development of the U.S. position for each WRC and leads the U.S. delegation at the conference through an ambassador named by the President. We found strong agreement among those we interviewed that it is important for the United States to develop its position in advance of the conference in order to have time to meet with other nations to gain international support for our positions. U.S. 
positions on WRC agenda items are developed largely through separate processes by FCC and NTIA with the involvement of their respective constituencies. To obtain input from nonfederal users, FCC convenes a WRC advisory committee composed of representatives of various radio interests (e.g., commercial, broadcast, private, and public safety users) and solicits comments through a public notice in the Federal Register. NTIA and federal government users also participate in FCC’s preparatory process. To obtain the views of federal spectrum users, IRAC meets to provide NTIA with input on WRC agenda items. Although IRAC’s WRC preparatory meetings are closed to the private sector due to national security concerns, nonfederal government users may make presentations to IRAC to convey their views on WRC agenda items. In addition, the Department of State solicits input from its International Telecommunication Advisory Committee (ITAC), made up of representatives of government, scientific, and industrial organizations involved in the telecommunications sector. Any differences of opinion between FCC and NTIA on agenda items must ultimately be reconciled into a unified U.S. position. In cases where differences persist, the ambassador who leads the U.S. delegation to the conference acts as a mediator to achieve consensus on a unified U.S. position. The Department of State ultimately transmits the U.S. position on WRC agenda items to the regional organization of which the United States is a member—the Inter-American Telecommunication Commission (CITEL), which convenes prior to a WRC to build regional consensus on conference agenda items. The department also transmits the U.S. position to ITU, which sponsors the conference. Figure 4 depicts the relationship among the domestic players and these two international organizations in preparing the U.S. position for the WRCs. We obtained conflicting views on the effectiveness of the U.S. preparatory process for WRCs. 
Department of State and FCC officials told us that the work of FCC and NTIA with their respective constituencies and with each other in preparation for a conference leads to U.S. positions on WRC agenda items that are thoroughly scrutinized, well reasoned, and generally supported among federal and nonfederal parties. In contrast, some industry officials told us that the NTIA process does not allow the private sector adequate involvement in the development of U.S. positions for the WRC. Also, some federal and industry officials said that, because each agency develops its positions through separate processes, it takes too long to meld the two toward the end of the preparatory period. For example, in the past, the U.S. position on some items has remained unresolved until the eve of the conference, leaving the United States with little time to build preconference support for them. The former U.S. Ambassador to the 2000 WRC recommended merging the separate FCC and NTIA preparatory groups to get an earlier start at working with industry and government users to reach a consensus on U.S. positions regarding WRC agenda items. However, NTIA said that the separate processes are needed because much of the government side of spectrum policy and use is classified and because NTIA and FCC are responsible for separate groups of constituents. In June 2002, FCC, NTIA, and Department of State officials stated they believed coordination in developing U.S. positions was improving and that most of the 2003 WRC agenda items were close to resolution. There has been long-standing concern about the length of tenure of the individual who is designated head of the U.S. delegation. The President— under his authority to confer the personal rank of ambassador on an individual in connection with a special mission of a temporary nature—has selected an ambassador to head the U.S. delegation to each WRC for a time period not exceeding 6 months. 
This authority allows the conferral of the personal rank of ambassador to be made without confirmation by the Senate, subject to appropriate notification. The former U.S. Ambassador to the 2000 WRC said that ambassador status is generally believed to confer a high level of support from the administration, helps to achieve consensus in finalizing U.S. positions, and enhances our negotiating posture with other countries. However, the former U.S. Ambassador also said that the brief tenure of the appointment leaves little time for an ambassador to get up to speed on the issues, solidify U.S. positions, form a delegation, and undertake preconference meetings with heads of other delegations to promote U.S. positions. In addition, the Ambassador said there is concern about the lack of continuity in leadership from one conference to the next, in contrast to other nations that are led by high-level government officials who serve longer terms and may represent their nations at multiple conferences. FCC and NTIA officials stated that longer-term leaders of national delegations are perceived by other participants as being more able to develop relationships with their counterparts from other nations, and that this helps them to negotiate and build regional and international support for their positions. Similar observations were made by the Office of Technology Assessment as far back as 1991, but no consensus has emerged to resolve this issue. Department of State officials said previous administrations have identified the person who was to become the ambassador early so that they could involve that person in conference planning prior to the start of the 6-month term. For example, the 2000 WRC Ambassador knew she would be chosen for the position and was given a temporary telecommunications policy position in the White House 4 months prior to her official selection. 
This position provided additional time for her to learn the issues and observe WRC preparatory meetings, but she could not lead the meetings until her formal selection about 5 months before the conference. Department officials said that the current administration is also planning to identify the 2003 WRC Ambassador several months before the official selection. Other suggestions for dealing with this issue that have been raised include establishing a telecommunications policy office in the White House, whose head would also be responsible for leading the delegation; extending the length of an ambassador’s appointment through a Senate confirmation process; and creating an upper-level career position within the Department of State to provide continuity from one conference to the next and organize WRC preparations. Officials at the Department of State said that, after a WRC concludes, countries need to implement the agreements reached at the conference— known as the Final Acts. The officials said that NTIA, FCC, and the Department of State share responsibility for implementing the Final Acts in the United States. NTIA and FCC develop an implementation manual that includes all of the necessary changes in U.S. allocations, regulations, and rules. FCC must then implement the changes through its rule-making process. Meanwhile, the Department of State prepares a Memorandum of Law to transmit to the Senate along with the Final Acts of the WRC for ratification. Officials from NTIA, FCC, and Department of State said that the United States has faced timeliness challenges in implementing the Final Acts over the last 10 years. In July 2002, NTIA officials stated that federal agencies are concerned that WRC allocation decisions of interest to the private sector are often dealt with quickly, while those primarily of interest to the federal government go without action. 
For example, at the 1997 WRC, the United States sought and gained a primary allocation of spectrum from 5250 MHz to 5350 MHz for an earth exploration satellite service. NTIA officials stated that FCC has still not formally considered their request for a national primary allocation for this service. In addition, one agency said that it had not gained access to two channels designated for its use by the 1997 WRC due to the slowness of the FCC rule-making process. Officials from another agency said that FCC’s table of allocations is out of date because it does not reflect some of the government-specific allocation changes made at WRCs over the last 10 years. The officials said that this has led others to seek allocations on some of these bands. FCC officials told us that some changes to the U.S. allocation table resulting from the WRCs had not been made because FCC had a shortage of engineering staff required to make the changes. For this reason, they said that FCC had to prioritize WRC allocation decisions and defer those changes that they believed had the least impact on spectrum use. These officials added, however, that additional staff recently hired have allowed FCC to complete the work needed to update the allocation table, and FCC plans to initiate the necessary rulemaking actions in the near future. In addition, the FCC officials stated that they are unaware of any impact the delays have had on planned federal systems. NTIA is required to promote the efficient and cost-effective use of the federal spectrum that it manages—over 270,000 frequency assignments as of June 24, 2002—“to the maximum extent feasible.” Accordingly, as accountability measures, NTIA has directed federal agencies to use only as much spectrum as they need and has established several processes and activities to encourage efficient spectrum use. However, NTIA does not have assurances that these processes and activities are effective. 
NTIA and federal agency officials said that key challenges include a shortage of staff with appropriate expertise to support spectrum management activities, as well as staffing and resource problems in implementing spectrum-efficient technologies. NTIA authorizes federal agency use of the spectrum through its frequency assignment process. Before submitting a frequency assignment application, an agency must justify to NTIA that the frequency assignment will fulfill an established mission need and that other means of communication, such as commercial services, are not appropriate or available. Agencies generally rely on mission staff to identify and justify the need for a frequency assignment and to complete the engineering and technical specifications for the application. Once an application is submitted, it goes through an NTIA review and a 15-day IRAC peer review process. NTIA staff members said they examine assignment applications to ensure that they comply with technical rules, while IRAC members said they primarily look to see whether the assignment could cause interference with other users. If no one at NTIA or IRAC objects, the assignment is automatically approved and added to the Government Master File. The requester can then begin operating on the assigned frequency. Figure 5 illustrates the frequency assignment process. NTIA officials said they are not in a position to independently assess the justification for each frequency request, not only because this would require a detailed understanding of an agency’s operational needs, but also because of the high volume of assignment action requests that require attention. On average, NTIA processes between 7,000 and 10,000 assignment action requests—applications, modifications, or deletions— from agencies each month. 
To help agencies prepare frequency assignment applications, as well as to help NTIA staff review them, NTIA has implemented a computer-based tool, called Spectrum XXI, to automate the application process. Spectrum XXI is designed to help agencies in a number of ways. For example, Spectrum XXI allows for status tracking and editing of applications. In addition, Spectrum XXI helps in assigning users to the most heavily used channels first, rather than less heavily used ones, in order to minimize the amount of spectrum space used. NTIA officials stated that they are continuing to modify Spectrum XXI to improve the efficiency of the selection of frequencies by new users. One spectrum manager we interviewed stated that Spectrum XXI has greatly reduced the amount of time and work involved in applying for a frequency assignment. However, four of the seven agencies we reviewed were not using this tool for various reasons. For example, spectrum managers from two agencies said that their own spectrum management programs better fit their needs. NTIA’s Frequency Assignment Review Program generally requires all federal users of spectrum to review their frequency assignments every 5 years. The purpose of the reviews is to determine if the frequency assignments are still essential to meeting the agencies’ missions, justified correctly, not redundant to other assignments, and up to date. Federal spectrum users are expected to modify or delete frequency assignments as needed based on the results of these reviews. NTIA said that it may delete assignments that have not been reviewed in more than 10 years. Using its database of federal agencies’ frequency assignments, NTIA is to track assignments that are due for review and provide a listing to the respective agencies. NTIA is notified that an agency has completed an assignment review when the agency requests a modification to the database that contains the frequency assignments. 
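The channel-selection idea attributed to Spectrum XXI above, favoring the most heavily used channel that still has room, can be sketched as a simple packing rule. This is a hypothetical illustration of the general heuristic, not NTIA's actual algorithm, and all names in it are assumptions:

```python
# Illustrative sketch: prefer the busiest channel that can still accept a new
# assignment, so lightly used channels stay clear for other services.
from typing import Optional

def select_channel(channel_load: dict, capacity: int) -> Optional[str]:
    """Return the most heavily used channel with room for one more assignment."""
    candidates = [c for c, load in channel_load.items() if load < capacity]
    if not candidates:
        return None  # all channels full; additional spectrum would be needed
    return max(candidates, key=lambda c: channel_load[c])

# Three channels, each assumed able to hold 10 assignments
load = {"162.100 MHz": 9, "162.125 MHz": 4, "162.150 MHz": 0}
print(select_channel(load, capacity=10))  # "162.100 MHz": busiest with room
```

Packing new assignments onto already-busy channels in this way minimizes the total amount of spectrum space occupied, which is the stated aim of the tool.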
These modifications may simply be requesting a change to the date on which the assignment was last reviewed or may indicate technical and operational changes made since the last review. NTIA forwards modification requests to IRAC members for their review. If no member objects to the modification, the user can continue to operate on the frequency assignment for another 5 years. NTIA has implemented additional requirements for reviews that are significantly overdue—meaning the federal agency has not reviewed the frequency assignment in over 10 years. Every 6 months, NTIA provides IRAC with a list of these overdue assignments for a case-by-case review and recommendation on whether to retain or delete the assignment. NTIA officials said this method of notification works very well in getting the reviews done because federal users recognize that it is easier to review existing assignments than it is to lose the frequency authorizations and start the process over. NTIA does not maintain any information on the number of assignments that have been deleted for noncompliance with the review program. According to NTIA officials, the Frequency Assignment Review Program “weeds out” assignments that are no longer being used so that they can be returned for use by others. We found, however, that the program relies mainly on self-reported agency information that receives no independent verification by NTIA. Comments by spectrum managers at the seven agencies we reviewed raise concerns about how well these reviews are being carried out. Officials from these agencies told us that they attempt to use spectrum as efficiently as possible, but five of them acknowledged that they are not completing the 5-year reviews in a timely or in-depth way. For example, a spectrum manager for a major agency division said that over 1,000 of its frequency assignments have not been reviewed in 10 years or more. 
According to agency officials, problems with performing timely assignment reviews are occurring due to shortages in qualified staff to complete the reviews and because completing the reviews is a lower priority compared to other agency work. For example, a spectrum manager at one agency noted that all field staff responsible for helping with the 5-year reviews had been eliminated, which impaired the timeliness and quality of the reviews. Another spectrum manager stated that his agency’s central spectrum management staff had operated a comprehensive program of oversight, on-site inspections, field staff training, and planning until 8 of their 10 full-time positions were eliminated. This official said that he could not ensure all spectrum assignments are being used as authorized. The spectrum manager at another agency said that he was sure that the agency was not using all of its frequency assignments, but he added that conducting a comprehensive review would be time consuming and of limited benefit. The spectrum manager located at an agency’s field office stated that some frequency assignments connected to a single system critical to mission functions had been deleted by NTIA because the agency did not have the staff or time to complete the assignment reviews. This manager stated the agency continued to use these frequencies while staff struggled to find the time to reapply for them. Aside from the assignment review process, NTIA had established additional programs for overseeing how federal agencies were using their spectrum, but these programs have been scaled back or discontinued. One component of NTIA’s Spectrum Measurement Program used van-mounted monitoring equipment operated by NTIA staff to verify that federal agencies were using assigned frequencies in different geographic locations in accordance with applicable technical regulations. 
Although NTIA officials recently stated that this program was an invaluable monitoring tool, the van-mounted verification has been discontinued due to a lack of agency resources. Another effort that is no longer active is NTIA’s Spectrum Management Survey Program, established in 1965, which included on-site visits by NTIA staff to determine whether federal agencies’ transmitters were being used as authorized, to educate field staff on NTIA requirements, and to improve spectrum management. NTIA said that although this program helped to correct frequency assignment information and provided for an exchange of information, the program is not currently operating because of increased workloads and a shortage of staff. The issue of reported spectrum staffing shortages at federal agencies has broader ramifications for the general management of spectrum that go beyond the frequency review and monitoring programs. In January 2002, NTIA officials told us that its Office of Spectrum Management was facing serious staffing problems. Specifically, the office had 21 vacancies out of a total of 122 positions. In addition, over 40 percent of the current staff will be eligible for retirement by 2006. NTIA officials said that agencies such as FCC and the Department of State have recently had a number of openings for technical positions at higher salary levels than NTIA currently offers. As a result, NTIA’s Office of Spectrum Management has lost staff to these agencies. In addition, two other agencies we reviewed have conducted staffing needs assessments that indicate that their current levels of staff are inadequate. First, an internal analysis conducted by the Coast Guard Maritime Radio and Spectrum Management Division showed an immediate need for six additional field staff members and at least one additional headquarters staff member to assist with spectrum management. 
Second, a June 2002 study sponsored by the Department of Energy (DOE) reviewed the resources and management structure of the 12 IRAC member federal agencies that hold more than 1,000 frequency assignments. Although the study’s analysis focused on agencies with large numbers of assignments, the complete study includes a description of all 20 IRAC agencies’ spectrum management organizational structures, reporting chains, and resource allocations, among other spectrum management issues. It concluded that federal and contract staffing for DOE’s spectrum management was inadequate when compared to that of other agencies, particularly with regard to planning, homeland security, and spectrum-use initiatives. Although the loss of qualified staff and the need to recruit new staff have been a source of concern for the agencies, no concerted effort has been made to define the federal government’s needs in this area or to develop a strategy for addressing them. NTIA officials mentioned that they had been working with the Office of Personnel Management to consider establishing a federal job series for spectrum management in order to help attract and retain these specialists. However, they said the effort appears to have lost momentum. Addressing these perceived human capital issues may help increase accountability. However, even if these problems were addressed, it is unclear whether this type of oversight approach by itself would ensure the efficient use of federal spectrum. NTIA and FCC officials have said that incentives that encourage the efficient use of spectrum by federal users could help further increase the efficiency of the federal government’s use of spectrum. NTIA stated that it has conducted technical research and introduced a number of additional initiatives to promote the efficient use of federal spectrum, but some of these efforts face challenges related to measurement, resources, equipment, and costs. 
For example, NTIA’s Institute for Telecommunication Science (ITS), established in 1977, operates the primary telecommunications research laboratory in the United States involved in the development and application of radio wave propagation measurements, studies, and prediction models. ITS provides the tools, analysis, and data that enable studies of spectrum use, efficiency, coverage, and interference analysis. ITS has participated in antenna studies that may result in a substantial increase in the “carrying capacity” of a radio system (or piece of spectrum) by providing multiple beams to independently link to different users on the same channel. In addition, ITS has been assisting the public safety community in increasing spectrum efficiency by examining and implementing system improvements to support increased voice and data traffic. Working with IRAC, NTIA also strives to establish standards that are equal to or better than private sector standards in conserving spectrum. For example, NTIA officials have noted that federal radar standards are among the tightest radar spectrum standards in the world and are currently under review for further refinements. NTIA officials said that, when applicable, NTIA uses the definition of spectrum efficiency described by ITU, namely the ratio of communications achieved to the spectrum space used, which has practical value for many types of commercial communications systems. The specific technical measurement may take different forms, depending on the system. For example, the spectrum efficiency of a commercial wireless system might be measured in terms of subscribers served per megahertz of spectrum used per square kilometer of service area. 
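The ITU-style efficiency ratio described above can be illustrated with a small calculation. The subscriber count, bandwidth, and coverage area below are hypothetical figures chosen only to show the arithmetic; they are not drawn from any actual system.

```python
# Illustrative calculation of the ITU-style spectrum efficiency measure
# (communications achieved relative to spectrum space used), expressed
# here as subscribers per MHz of spectrum per square kilometer of area.

def wireless_efficiency(subscribers: int, bandwidth_mhz: float, area_km2: float) -> float:
    """Subscribers served per MHz of spectrum per km^2 of service area."""
    return subscribers / (bandwidth_mhz * area_km2)

# Hypothetical commercial system: 250,000 subscribers on 25 MHz of
# spectrum covering a 1,000 km^2 metropolitan service area.
efficiency = wireless_efficiency(250_000, 25.0, 1_000.0)
print(f"{efficiency:.1f} subscribers per MHz per km^2")  # 10.0
```

Because the measure normalizes by both bandwidth and geographic area, it rewards systems that reuse the same frequencies across many small cells, which is one reason it fits commercial cellular systems better than wide-area radars or navigation systems.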
NTIA officials cautioned, however, that many or most of the systems used by the federal government, including radars, navigation, military tactical, and scientific systems, do not fall within the scope of this type of measure of spectrum efficiency and that no effective measure of spectrum efficiency has been identified for these latter types of systems. Implementing more spectrum-efficient technologies at federal agencies can be challenging. For example, around 1990, NTIA began exploring the use of “narrowbanding” because of concerns over its ability to continue to meet federal agencies’ land mobile communications needs. Narrowbanding is a technique for reducing the amount of spectrum (bandwidth) needed to transmit a radio signal, thereby freeing up spectrum to meet future growth. In 1992, the Congress directed NTIA to adopt and implement a plan for federal agencies with existing mobile radio systems to use more spectrum-efficient technologies. With the approval of IRAC, NTIA required all federal agencies to upgrade their existing land-based mobile systems so as to reduce the bandwidth needed per channel from 25 kHz to 12.5 kHz. NTIA set deadlines for the narrowbanding requirement, which is to be completed in two stages by 2008. All federal agencies need to meet the narrowbanding requirement in order to prevent harmful interference. NTIA officials stated that any agency not meeting the narrowbanding requirements would be responsible for eliminating the harmful interference. NTIA officials also stated that no acceptable justifications for not adopting narrowbanding have been proposed or developed. Spectrum managers from the seven agencies we reviewed presented a mixed picture about their ability to meet this deadline. While some believed that they were on track, others stated that they were either having difficulty meeting the deadlines or would not meet the deadlines at all. 
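The spectrum savings behind the narrowbanding requirement can be sketched with simple arithmetic: halving the per-channel bandwidth from 25 kHz to 12.5 kHz doubles the number of channels that fit in a given band. The 1 MHz band width used below is hypothetical, chosen only to make the doubling concrete.

```python
# Sketch of the narrowbanding arithmetic: reducing channel spacing
# from 25 kHz to 12.5 kHz doubles the channel capacity of a band.

def channel_count(band_khz: float, channel_khz: float) -> int:
    """Number of whole channels that fit in a band of the given width."""
    return int(band_khz // channel_khz)

band = 1_000.0  # a hypothetical 1 MHz slice of land mobile spectrum
print(channel_count(band, 25.0))   # 40 channels at 25 kHz spacing
print(channel_count(band, 12.5))   # 80 channels at 12.5 kHz spacing
```

The doubled channel count is the "freed up" spectrum the requirement aims for; the cost side, as the officials quoted below describe, is that every existing radio and system design built around 25 kHz channels has to be replaced or retuned.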
The Chief Information Officer in one agency compared the requirement to an unfunded mandate; he said the agency had not been provided with the financial resources needed to make system design changes, buy new equipment, and maintain current equipment until the transition was finalized. He stated that his office could not compete with other agency priorities for funding. Officials at other agencies stated that shortages in qualified staff were affecting their ability to meet the narrowbanding deadlines. For example, they said additional staff are needed to design systems using the smaller amount of bandwidth and to find and request the needed frequencies. Finally, several officials stated that the commercial sector would be unable to provide them all the narrowbanding equipment and support needed to continue their operations even if the money were available. On June 26, 2002, NTIA requested that federal agencies provide the status of their compliance with the narrowbanding requirements. Another example of problems in implementing spectrum-efficient technologies involves a technique known as trunking. Trunking systems conserve spectrum by enabling users to share a common set of voice radio channels rather than have their own dedicated channels that may not be heavily used at all times. NTIA sponsored a pilot trunking program for federal agencies in the early 1990s that included six cities. According to NTIA, some agencies resisted the program because, although spectrum could be conserved, the agencies found that it was more costly to participate in trunking than it was to use their own channels. In addition, some agencies said the trunking systems did not meet their mission needs. In 1993, NTIA insisted that the contracted system be used unless a waiver had been approved for an economic and/or technical exemption. 
NTIA noted that the program has only been successful in Washington, D.C., where agency demand for frequency assignments, and therefore spectrum congestion, is extremely high. NTIA told us that the congressionally mandated spectrum management fees agencies pay help promote spectrum efficiency by providing federal users with an incentive to return frequency assignments that they no longer need. These fees are designed to recover part of the costs of NTIA’s spectrum management function. The fees began in 1996 and amounted to about $50 per frequency assignment in 2001. NTIA decided to base the fee on the number of assignments authorized per agency instead of the amount of spectrum used per agency because the number of assignments better reflects the amount of work NTIA must do for each agency. Moreover, NTIA stated that this fee structure provides a wider distribution of cost to the agencies. For example, basing the fee on the amount of bandwidth used would have resulted in the Air Force paying the majority of the fees because of the large amount of spectrum used by the radar systems it operates. Although NTIA officials said that spectrum fees provide an incentive for agencies to relinquish assignments, it is not clear how much this promotes efficient use of spectrum. Officials from two agencies said that the financial costs were not high enough to cause them to decrease the number of frequency assignments they retained. Specifically, officials from one of the agencies said that the amount of money paid in spectrum fees was a small share of the money needed to operate a radio system. In addition, agencies may be able to reduce assignments without returning spectrum. For example, a spectrum manager for a federal agency said that the spectrum fee has caused the agency to reduce redundant assignments, but that it has not affected the efficiency of the agency’s spectrum use because the agency did not return any spectrum to NTIA as a result of reducing its assignments. 
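NTIA's choice between fee bases can be illustrated with a rough sketch. The agency names, assignment counts, bandwidth figures, and cost pool below are entirely hypothetical; they serve only to show why a bandwidth-based fee would concentrate costs on an agency that, like the Air Force with its radar systems, uses a large amount of spectrum across relatively few assignments.

```python
# Hypothetical comparison of the two fee bases NTIA weighed:
# allocating a fixed cost pool per assignment versus per unit of
# bandwidth. All figures are invented for illustration.

FEE_POOL = 100_000.0  # hypothetical total cost to recover, in dollars

agencies = {                   # name: (assignments, bandwidth in MHz)
    "Radar-heavy agency": (5_000, 900.0),
    "Agency B":           (20_000, 50.0),
    "Agency C":           (15_000, 50.0),
}

total_assign = sum(a for a, _ in agencies.values())   # 40,000
total_bw = sum(b for _, b in agencies.values())       # 1,000 MHz

for name, (assign, bw) in agencies.items():
    by_assign = FEE_POOL * assign / total_assign
    by_bw = FEE_POOL * bw / total_bw
    print(f"{name}: ${by_assign:,.0f} per-assignment vs ${by_bw:,.0f} per-bandwidth")
```

Under the per-assignment base the radar-heavy agency pays the smallest share; under the per-bandwidth base it pays the overwhelming majority, which matches NTIA's stated reason for choosing the assignment count as the fee basis.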
Other countries are moving toward using payment mechanisms for government spectrum users that are specifically designed to encourage efficiency, rather than to recover the cost of managing the spectrum. Both Canada and the United Kingdom are reviewing their administrative fee structures at this time with the intent of encouraging spectrum efficiency. Our work on this issue is ongoing and will be addressed in a report to be completed in early 2003. The divided structure of U.S. spectrum management, coupled with the increasing difficulty of accommodating new services and users, has heightened the importance of coordinated national spectrum planning. Although FCC and NTIA have recently taken steps to better coordinate spectrum management, it is unclear whether these steps will result in a national spectrum strategy. The absence of such a strategy may make it more difficult for FCC and NTIA to avoid contentious, protracted negotiations when providing for future spectrum requirements. Similarly, the United States’ ability to promote its strategic and economic interests at WRCs has become increasingly important and difficult as spectrum has grown more congested and countries vie for advantage in the multibillion-dollar global telecommunications marketplace. The ongoing debate about the effectiveness of the United States’ preparatory process for WRCs has raised concerns that the U.S. delegation may not be able to promote U.S. positions as effectively as possible. While the Department of State, FCC, and NTIA maintain that they have improved preparations for the 2003 WRC through better coordination, key issues remain unresolved, including the use of separate processes by FCC and NTIA to develop U.S. positions and the short tenure of the head of the delegation. 
Because of the large number of federal frequency assignments, NTIA’s processes for promoting the efficient use of federal spectrum are heavily dependent on the federal agencies that use the spectrum. However, some federal agencies are not conducting comprehensive reviews of their frequency assignments. Compounding this problem is NTIA’s discontinuation of two spectrum-monitoring programs that helped promote accountability by verifying that federal agencies were using their spectrum assignments as specified. Federal agencies and NTIA primarily attributed the lack of comprehensive reviews and the discontinuation of NTIA monitoring programs to staffing and resource issues. The result of these limitations is that the federal government does not have the information necessary to assure that federal agencies are using only as much spectrum as needed to fulfill their mission requirements. Moreover, even if additional resources became available to enable agencies to conduct reviews to determine how effectively they are using spectrum available to them, it is unclear if this alone could ensure the efficient use of hundreds of thousands of federal spectrum assignments. Other countries are moving toward using incentives such as payment mechanisms for government spectrum users to encourage conservation of spectrum. In follow-on work, we will be looking at the types of incentives that are being employed to encourage both government and nongovernment users to conserve spectrum. In order to improve U.S. spectrum management, we are making the following recommendations: The Secretary of Commerce and the Chairman of FCC should establish and carry out formal, joint planning activities to develop a clearly defined national spectrum strategy to guide domestic and international spectrum management decision making. The results of these planning activities should be reported to the appropriate congressional committees. 
Following the 2003 WRC, the Secretary of State, the Secretary of Commerce, and the Chairman of the Federal Communications Commission should jointly review the adequacy of the process used to develop and promote the U.S. position, including the separate processes used by FCC and NTIA and the short tenure of the head of delegation, and prepare a report containing any needed recommendations for making improvements. The report should be provided to the appropriate congressional committees. To strengthen the management and accountability of the federal government’s use of spectrum, the Secretary of Commerce should direct NTIA, assisted by IRAC and the Office of Personnel Management, to analyze the human capital needs of federal agencies for spectrum management and develop a strategy for addressing any identified shortcomings. This analysis should be linked to near-term and long-term human capital issues that may be identified as part of the development of a national spectrum strategy. The Secretary of Commerce should also develop a strategy for enhancing the department’s oversight of federal agencies’ use of spectrum, such as revitalizing its former monitoring programs, and define the Department of Commerce’s human capital needs for carrying out this strategy. We provided a draft of this report to FCC, the Department of Commerce, and the Department of State for review and comment. They were in general agreement with our recommendations. FCC said that both it and the Department of Commerce have initiated processes to review and improve spectrum management. FCC also said that it would be beneficial for the Department of State, Department of Commerce, and FCC to further review the U.S. preparatory process following the 2003 WRC. FCC also offered some technical comments that we incorporated into the report where appropriate. FCC’s written comments appear in appendix III. 
The Department of Commerce said it is time for the United States to take a broad look at the organizational structures and processes it has built both nationally and internationally to manage and plan spectrum use. The Department of Commerce also said that NTIA and FCC participate together in spectrum planning activities, as evidenced by NTIA’s Spectrum Summit in April 2002 and FCC’s spectrum policy workshops, but that spectrum planning and interagency coordination could be improved. With regard to WRCs, the Department of Commerce agreed that the Department of State, FCC, and NTIA should jointly review the adequacy of the preparation process following the 2003 WRC. The Department of Commerce also said that it would review its human capital needs and current resources in spectrum management and develop a strategy for addressing any shortcomings. The Department will also encourage other agencies that are members of IRAC to conduct a similar analysis. The Department also offered some technical comments that we incorporated into the report where appropriate. The Department of Commerce’s written comments appear in appendix IV. The Department of State said that it would consult with the Department of Commerce and FCC after the conclusion of the 2003 WRC, and it offered a technical comment that we incorporated into the report. The Department of State’s written comments appear in appendix V. We are sending copies of this report to the appropriate congressional committees. We are also sending this report to the Secretary of State, the Chairman of the Federal Communications Commission, and the Secretary of Commerce. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or at guerrerop@gao.gov. 
Individuals making key contributions to this report include Dennis Amari, Karin Bolwahnn, Keith Cunningham, John Finedore, Rahul Gupta, Peter Ruedel, Terri Russell, Tanya Tarr, Dr. Hai Tran, Mindi Weisenbloom, and Alwynne Wilbur. Different parts of the radiofrequency spectrum have different technical characteristics that make them better suited for some types of communications than others. For example, the most technically suitable spectrum for mobile communications is below 3 gigahertz because this part of the spectrum provides the best match for spectrum propagation characteristics (such as distance, capacity, and reliability) required for mobile communications. The major parts and uses of the spectrum are as follows: The lower frequency waves (including very low frequency (VLF), low frequency (LF), and medium frequency (MF)) are located from 3 kilohertz (kHz) to 3 megahertz (MHz). They tend to travel along the ground and penetrate water and solid objects. Uses include submarine communication and AM radio. High frequency (HF) waves are located from 3 MHz to 30 MHz. They travel along the ground and into the sky, where they are reflected back to earth by the ionosphere. By using this reflection to extend range, devices in the HF bands can transmit over long distances on relatively low power. Amateur Radio (Ham), Citizens Band Radio Service (CB), military tactical radio, and maritime communications are found in this frequency range. Very high frequency (VHF) waves are located from 30 MHz to 300 MHz. They follow the ground less and will pass through the ionosphere, which makes satellite communication possible. To operate in the VHF range, transmitters require less power but larger antennas relative to higher frequencies. Broadcast television, FM radio, federal government, public safety, and private mobile radio services are some of the applications that operate in this frequency range. Ultrahigh frequency (UHF) waves are located from 300 MHz to 3 gigahertz (GHz). 
The combination of smaller antenna and lower power requirements for device operation makes this frequency range ideal for many wireless telecommunication applications. Broadcast television, first- and second-generation mobile telephones, satellites (such as the global positioning system and commercial satellites), federal and nonfederal radio systems, and numerous military applications—like the Ballistic Missile Early Warning System—operate in UHF bands. Superhigh frequency (SHF) waves are located from 3 GHz to 30 GHz, and extremely high frequency (EHF) waves are located from 30 GHz to 300 GHz. These waves require more power to operate and are affected by rain and clouds, especially at the higher frequencies. Numerous military and commercial satellites, aeronautical radio altimeters, radars (such as Terminal Doppler Weather Radar), and fixed microwave links occupy these frequency bands. Some of the highest bands are allocated for certain uses but remain unused due to the cost and technical constraints of using those frequencies. Numerous legislative, regulatory, legal, and policy decisions and actions have shaped the United States’ management and use of the radiofrequency spectrum. This appendix provides supplemental information on major milestones in the development of the divided structure for domestic spectrum management and on international conferences on global spectrum issues. Figures 6, 7, 8, and 9 throughout this appendix illustrate the interplay of wireless technological advances with key international and domestic policy events. Radio Signal Transmission—Guglielmo Marconi became the first person to succeed in sending a message in telegraphic code over a distance of 1 1/4 miles using electricity without wires. Ships at Sea—Radio’s most important initial use was at sea, where it reduced the isolation of ships during emergencies. 
By 1904, according to a report of the President’s Board on Wireless Telegraphy, there were 24 radio-equipped naval ships and 10 more planned; 20 naval coastal stations had been established, and equipment for 10 more had been ordered; 6 stations were operated by the U.S. Army; 2 stations were operated by the Weather Bureau; 5 private companies were operating coastal stations (one serving the Pacific coast); and a total of 200 additional stations on shore or at sea had been planned. First International Conference—The First International Radio Telegraphic Conference was held in Berlin, Germany, with the governments of Austria, France, Germany, Great Britain, Hungary, Italy, Russia, Spain, and the United States represented. The conference drafted a protocol to address the exchange of messages from coastal stations with ships regardless of the system of radiotelegraphy used. The protocol served as the basis for the first agreement on the use of radiotelegraphy, which occurred in 1906. Roosevelt’s Interdepartmental Board—At the recommendation of the Secretary of the Navy, President Theodore Roosevelt appointed an Interdepartmental Board of Wireless Telegraphy to consider “the entire position of wireless telegraphy in the service of the National Government.” Among matters addressed by the Board were the control of interference between radiotelegraph stations in general and nonduplication of coastal stations by government departments. The Board recommended that all government coastal radio facilities be placed under control of the Navy, and that all private stations be licensed by the Department of Commerce and Labor. First International Convention—A second International Radiotelegraphy Conference was convened in Berlin, Germany, with 28 countries represented. The conference adopted a convention that followed closely the protocol of the first conference. 
The main provisions of the convention were: requiring that messages by all coastal stations and ships be accepted regardless of the system used; establishing priority for distress calls from ships; and creating a bureau to gather and distribute information about the radiotelegraphy systems and coastal station installations in each country. The convention also addressed tariffs for international radio communications and regulations prescribing specific wavelengths from which commercial entities were excluded. Technical and operational standards for radio communications in the form of “Service Regulations” were included in an appendix. A precursor to the International Table of Allocations, the regulations distinguished two service categories: (1) “general public service,” with an exclusive allocation of the 187-500 kHz band; and (2) “long-range or other services,” which could be assigned to other frequencies. Wireless Ship Act—The first instance of U.S. government regulation of radio technology and services, this act required any U.S. or foreign oceangoing ship with 50 or more passengers to be equipped with radio communications apparatus and an operator. The Department of Commerce and Labor was designated to provide for its execution. Wireless Ship Act Amended—Three months after the sinking of the Titanic, Congress quickly passed amendments to the Wireless Ship Act of 1910. Among the amendments to the law were requirements that ships carry an auxiliary power supply capable of enabling radio apparatus to be operated continuously for at least 4 hours at a minimum range of 100 miles, day or night; that ships carry two or more persons skilled in the use of radio apparatus; and that ships traversing the Great Lakes comply with provisions of the Act. Radio Act of 1912—The Radio Act of 1912 was the first domestic statute that addressed spectrum allocation. It was enacted, in part, to comply with obligations under the international convention of 1906. 
The Act required every operator of radio to obtain a license from the Secretary of Commerce and Labor. (When the Department of Labor was separately established in 1913, these powers were retained by the Department of Commerce.) Any person who operated any apparatus for radio communication without a license was guilty of a misdemeanor, and the offending apparatus was subject to forfeiture. Licenses were subject to detailed regulations contained in the Act itself, with certain additional and supplementary regulations promulgated by the Secretary of Commerce. The Act also provided for the protection of federal government radio operations and gave the President special authority over radio communications in emergencies. Third International Conference—Although the United States was a signatory to the 1906 international convention, the U.S. Senate did not ratify the treaty until after its adhering members withdrew an invitation to the United States to attend the third international conference scheduled for June 1912 in London, England. Soon thereafter, and only 2 months before the start of the conference, the Senate ratified the 1906 convention, resulting in a renewed invitation to the United States to attend the London conference. In light of the sinking of the Titanic earlier that year, the use of radiotelegraphy for safety of ships at sea dominated this conference. The resulting convention was ratified in the United States by the Senate in 1913. Legislation on Radio Operations Considered by Congress—In the late 1910s, legislation was considered by Congress to maintain government control of all radio stations and prohibit the construction of any new commercial stations. An alternative to government control was proposed—the establishment of a privately controlled company operating as a government-authorized monopoly. 
These proposals were advocated in response to Great Britain’s dominance in wireline communications and the pursuit of dominance by British nationals in radio communications. While neither proposal was adopted in the United States, in 1920 Congress did act on a recommendation of the Navy to authorize the use of naval stations for a temporary 2-year period for the transmission and reception of private commercial messages at locations that lacked adequate commercial facilities. This authority was extended again in 1922 and 1925 and, ultimately, made permanent by an act of Congress in 1927. Devising a New International Union—Representatives of the Allied nations of World War I met in Washington, D.C., to create a new international union and simplify communications by bringing all methods of electrical transmission, as far as practicable, under the same rules. A convention and regulations were drafted setting forth basic international institutional features for telecommunications. Although a consensus was not reached, provisions of these documents were used at the next international radiotelegraph conference held in 1927 and, ultimately, served as the basic structure of the International Telecommunication Union (ITU), which was established in 1932. Introduction of Commercial Radio—Westinghouse, one of the leading radio manufacturers, devised a means of selling more radios by offering radio programming. Dr. Frank Conrad, who had played records over the airwaves for his friends, was asked by Westinghouse to establish a station in Pittsburgh, Pennsylvania, that would regularly transmit programming. The Department of Commerce licensed the station to operate on 833.3 kHz and awarded it the call letters KDKA. On the night of November 2, 1920, KDKA made what is claimed to be the nation’s first commercial radio broadcast. The commercial radio business grew quickly; within 4 years, there were nearly 600 commercial radio stations licensed in the United States. 
Public Safety Use of Land Mobile Radio—Among the first known experimental uses of land mobile radio was by the Police Department of Detroit, Michigan, for emergency dispatch in patrol cars. The Detroit Police Department implemented a police dispatch system using a frequency band near 2 MHz. This service proved to be so successful that the channels allocated in the band were soon used to their full capacity. Police and emergency services’ communications needs are said to have been critical to the development of mobile radio telephone services. First National Annual Radio Conference—Because radio interference had become so chaotic, with the rise of radio broadcasting and the limitations of the Radio Act of 1912, Secretary of Commerce Herbert Hoover convened a conference of manufacturers, broadcasters, amateur radio representatives, and civilian and military government radio communications personnel to study the problem and make recommendations to alleviate the overcrowding of the radio waves. Three subsequent conferences were held in each of the following years. Legislation was introduced to implement various recommendations of the national radio conferences throughout this period. There was disagreement as to whether the Secretary of Commerce or a new commission should be given regulatory authority over spectrum use. However, it was not until 1927 that a compromise was reached on a framework for the management of radiofrequency spectrum by the federal government. Formation of IRAC—To enable the most effective use of spectrum by government, the Interdepartment Advisory Committee on Governmental Radio Broadcasting (later renamed the Interdepartment Radio Advisory Committee, or IRAC) was formed. The 1922 national radio conference awakened several of the federal government departments to the need for cooperative action in solving problems arising from the federal government’s interest in radio use. 
Secretary Hoover invited interested government departments to designate representatives for a special government radio committee. When they met, the committee recommended that a permanent interdepartment committee be formed. The committee agreed that its scope should extend beyond broadcasting and should be advisory to the Secretary of Commerce in all matters of government radio regulation. Legal Decisions on the Secretary’s Powers under the 1912 Act—Several key court decisions and opinions of the Attorney General regarding the power of the Secretary of Commerce were made following enactment of the Radio Act of 1912. For example: In 1912, the Attorney General stated in an opinion to the Secretary of Commerce and Labor that the Secretary did not have discretion in the matter of granting or refusing radio licenses and was not given general regulative powers under the Radio Act of 1912. In Hoover v. Intercity Radio Co., Inc., 286 F. 1003 (D.C. Cir. 1923), the Secretary of Commerce was denied authority to use his discretion to refuse a radio license on the grounds that he “had been unable to ascertain a wave length for use by Plaintiff, which would not interfere with government and private stations.” The court pointed out that the Radio Act of 1912 necessarily contemplated interference between stations, that the Secretary had no discretion to refuse the license, and that the issuance of licenses was a ministerial act. The court held in U.S. v. Zenith Radio Corporation, 12 F.2d 614 (N.D. Ill. 1926), that the Secretary of Commerce had no power to make regulations additional to those found in the Radio Act and that it was, at best, ambiguous on whether the Secretary could impose a limitation on the hours of operation of a radio licensee. In Carmichael v. Anderson, 14 F.2d 166 (W.D. Mo. 
1926), the court held that while the Secretary of Commerce had the right to grant licenses with restrictions agreed upon by multiple applicants—such as time sharing by two radio operators using the same frequency—the Secretary may have no right to impose restrictions other than those contained in the Radio Act of 1912. In the case Tribune Co. v. Oak Leaves Broadcasting Station, Inc., (Cir. Ct., Cook County, Ill. 1926) reprinted in 68 Cong. Rec. 216–219 (1926), the court held that the novelty of broadcasting did not prevent an established station from asserting a right to be free from interference and the destruction of its operations by a newcomer. In the court’s view, the “priority of time”—obtaining a license first—created a superior right. In 1926, the Acting Attorney General issued an opinion stating that a broadcasting station could not operate under the Act without a license, but the Secretary had no discretion to refuse a license upon a proper application. Moreover, the Secretary had no power to designate the frequency within the broadcast band at which a broadcasting station might operate, nor to prescribe the hours of operation, to limit the power of stations, or to issue licenses of limited duration. Radio Act of 1927—The Radio Act of 1912 proved to be totally inadequate in coping with the spectrum demands of the rapidly growing radio broadcasting industry. Further, Congress had become concerned with other issues related to spectrum use, such as vested rights in the spectrum, the basis or criteria for granting licenses, and the potential monopoly in radio equipment manufacturing. Five years in the making, the Radio Act of 1927 was enacted with two key provisions: the creation of a new government commission to manage nongovernment spectrum use, and the adoption of the “public interest, convenience, and necessity” standard for licensing. 
Concerns about placing all regulatory authority for radio licensing in one individual, such as the Secretary of Commerce, led to the adoption of a compromise—the creation of the Federal Radio Commission (FRC), a five-member independent regulatory agency with licensing authority for nongovernment stations for a period of one year. After 1 year, as originally enacted, licensing authority would revert back to the Secretary of Commerce and the FRC would serve as an appellate body. Among the responsibilities assigned to the FRC were the following: issuing station licenses, classifying radio stations, assigning frequencies, describing types of service, preventing interference, establishing power and location of transmitters, and establishing coverage areas. The Act reserved to the President authority over all government radio stations. The “public interest, convenience, and necessity” standard was not defined in the Act. First International Table of Frequencies—Representatives from nations around the world met in Washington, D.C. for the third international radiotelegraphy conference, agreeing to many of the proposals discussed at the 1920 Washington meeting. The conference agreed to a request made at the 1925 Paris Telegraph Conference to consider the unification of the radiotelegraph and telegraph conventions into a single international instrument. In addition, the conference resulted in agreement on the first International Table of Frequency Allocations. The following services were given exclusive or shared use of various frequency bands between 10 kHz and 40 MHz: (1) fixed, (2) mobile, (3) maritime mobile, (4) broadcasting, (5) radio beacon, (6) air mobile, and (7) amateur. The conference also created the International Radio Consultative Committee for purposes of studying technical and related radio communications questions. 
International Telecommunication Union Formed—Unification of the international radiotelegraph and telegraph conventions was accomplished in Madrid, Spain, thus forming a single international treaty for both wireline and wireless communications, and a single international treaty organization known as the ITU. The use of radio for both aeronautical mobile communications and broadcasting had increased substantially in the late 1920s, and allocations had to be identified for them in the frequency allocation table. Because of the nature of propagation characteristics of the contested frequencies, low and medium bands were divided into a European region and “other regions.” Enactment of the Communications Act of 1934—At the request of President Franklin Roosevelt, an interdepartmental committee was established in 1933 by the Secretary of Commerce to study the problem of how to regulate communications. Reporting to the President the following year, the Committee recommended that all regulation over communications—both radio and common carrier—be vested in a single agency. With the committee report, President Roosevelt sent a letter to Congress recommending the creation of the Federal Communications Commission (FCC), transferring authorities of the Federal Radio Commission and (as pertaining to communications) the Interstate Commerce Commission, affecting services that “rely on wires, cables, or radio as a medium of transmission.” Legislation embodying the recommendation was passed by Congress and signed into law by President Roosevelt on June 19, 1934. Title III of the Act, governing the provision of radio services, is intended to “maintain control…over all the channels of radio transmission,” and provide for the use—but not ownership—of channels of the radio-frequency spectrum through licenses of limited duration. 
Among the key authorities granted to FCC in Title III of the Act are to: make reasonable regulations governing the interference potential of radio-frequency emitting devices; classify radio stations; prescribe the nature of services in each class of licensed stations; assign frequency bands to various classes of stations and assign frequencies for each individual station; make regulations to prevent interference between stations; study new uses of radio and provide for experimental use of frequencies; and suspend licenses for violations of the Act. Title III also includes provisions addressing broadcasting. Like the Radio Act of 1927, the Communications Act of 1934 required the commission to use the “public interest, convenience, and necessity” standard for granting licenses. In order to satisfy the standard, FCC was authorized to grant applications and make “such distribution of licenses, frequencies, hours of operation, and of power among the several States and communities as to provide a fair, efficient, and equitable distribution of radio service to each of the same.” Defense Communications Board Formed—President Roosevelt issued an executive order creating the Defense Communications Board (renamed the Board of War Communications) to coordinate the relationship of all branches of communication to the national defense. The Board was composed of: the Chairman of FCC, who served as Chairman; the Chief Signal Officer of the U.S. Army; the Director of Naval Communications; the Assistant Secretary of State, Division of International Communications; and the Assistant Secretary of the Treasury, Coast Guard. During a war involving the United States, IRAC was to serve as a committee of the board in an advisory capacity. IRAC-FCC Agree to Interference Notice—IRAC and FCC agreed to cooperate in giving each other notice of all proposed actions that would tend to cause interference to radio stations managed by the other. 
Three Regions Formed for International Allocations—At the first post-World War II international radio conference held in Atlantic City, New Jersey, extensive changes were made to the International Table of Frequency Allocations reflecting the advances in radio technology, such as radar and similar radio-determination systems, made during World War II. In addition, new services contending for allocations produced further fragmentation of the table and a new arrangement for spectrum allocations. The new arrangement subdivided the world into three regions—Europe, U.S.S.R., and Africa in region 1; North and South America comprising region 2; and Asia, Australia, and Oceania in region 3. Communications Policy Board Established—By executive order issued by President Truman, the President’s Communications Policy Board was established to study and make recommendations on the policies and practices that should be followed by the federal government in the field of telecommunications to meet the broad requirements of the public interest. The decision to appoint the Board stemmed in part from the inability of existing organizations to resolve competing requirements of FCC on behalf of nongovernment users and government agencies for the use of high frequencies. In a report to the President, the Board recommended that either a single adviser, or a three-person board, carry out the following duties: plan and execute the authority of the President to assign frequencies and to exercise control over the nation’s telecommunications facilities during a national emergency or war; stimulate and correlate the formulation of plans and policies to ensure maximum contribution of telecommunications to the national interest and maximum effectiveness of U.S. 
participation in international negotiations; stimulate research on problems in telecommunications; establish and monitor a system of initial justification and continued use of frequencies by government agencies; and supervise, in cooperation with FCC, the division of spectrum space between federal government and nonfederal government users. President Truman Appoints Telecommunications Adviser— Approving a recommendation of the President’s Communications Policy Board, President Truman issued an executive order establishing the Telecommunications Adviser within the Executive Office of the President to carry out the duties prescribed by the Board. IRAC Reorganizes and FCC’s Role Becomes Liaison—IRAC was reconstituted with a permanent Chairman designated by the Telecommunications Adviser to the President and was charged with the additional responsibilities of formulating and recommending policies, plans, and actions in connection with the management and usage of radio frequencies by the U.S. government. FCC withdrew as a regular member of IRAC and in lieu thereof designated a liaison representative to work jointly with IRAC in the solution of mutual problems. Position of Adviser to the President Abolished—President Eisenhower accepted the resignation of the Telecommunications Adviser to the President and issued an executive order abolishing the position and transferring the functions to the Director of the Office of Defense Mobilization. IRAC Establishes Assignment Principles—IRAC established principles for the assignment and use of radio frequencies by government agencies, including assurances that requests are justified and assignments are used by the agencies and not stored for future use. International Allocation for Satellite Service Adopted—At the World Administrative Radio Conference, held in Geneva, Switzerland, the assembled nations revised the International Table of Frequency Allocations to accommodate use of higher radio frequencies. 
A brand new radio service was defined that would eventually bring about a new era of international conferences and issues—the satellite radiocommunication service. The next international radiocommunication conference would not be held for another 20 years. Communications Act Amendments of 1960—Congress added new sections to the Communications Act of 1934 addressing comparative hearings held by FCC to determine licensing. The new sections were added following the decision in U.S. v. Storer Broadcasting, 351 U.S. 192 (1956). In Storer, the Court held that a hearing is not required under Sec. 309 of the Act in cases where undisputed facts show that the granting of an application would contravene the Commission’s perception of the “public interest.” In the opinion of the Court, Congress did not likely intend FCC to “waste time on applications that do not state a valid basis for a hearing.” The Act was revised to provide FCC with broad discretion to avoid hearings on petitions to deny a license application unless a substantial and material question of fact is presented. Communications Satellite Act of 1962—This act provided for U.S. participation in a global commercial communications satellite system by the Communications Satellite Corporation under government regulation. The principal task of the corporation was to plan, establish, and operate the system in cooperation with other nations to furnish, for hire, satellite relay of international and interstate telephone and telegraph services, including television. The U.S. portion of the system was subject to the same regulatory controls by FCC as were other communications common carriers. Director of Telecommunications Management Position Established—President Kennedy issued an executive order establishing the position of Director of Telecommunications Management. The authority of the President to assign, amend, modify or revoke frequency assignments to government agencies was delegated to the Director. 
IRAC Approves Spectrum Management Manual—IRAC approved, as a working document, a draft “Manual of Regulations and Procedures for Frequency Management.” After approval by the Director of Telecommunications Management, copies were distributed to all government users of radio, and it became the guideline for daily use. Report on Telecommunications Science and the Federal Government Released—The report, Electromagnetic Spectrum Utilization—The Silent Crisis, prepared by the Telecommunication Science Panel of the Commerce Technical Advisory Board, Department of Commerce, suggested the appearance of a strong basis for the separate management of government and nongovernment radio spectrum use. The separation is rooted mainly in the direct responsibility of the President for national defense, the report states, and the missions of the federal agencies; whereas the administration of nongovernment telecommunications in the national interest requires processes that provide adequate public representation of economic and political forces. Periodic Review of Government Assignments—IRAC approved a policy for the periodic review of government frequency assignments on a 5-year cycle. The procedure would serve to eliminate unused assignments, update remaining assignments, and make the master file of government assignments much more useful in engineering new assignments. President’s Task Force on Communications Policy Issues Report—Neither the President nor any executive branch agency had access to “a source of coordinated and comprehensive policy advice,” concluded the President’s Task Force on Communications Policy in its report to President Johnson. As a result, the executive branch had difficulty presenting a coherent and consistent position on problems. 
To address these problems, the Task Force recommended the establishment of an executive agency to pursue long-term strategy and coordination, to formulate policy, and to serve other executive departments and agencies as a resource center for communications expertise. Office of Telecommunications Policy Created—Congress approved a plan proposed by President Nixon to transfer various telecommunications functions of the President to a new Office of Telecommunications Policy. The new office would be responsible for developing plans, policies, and programs with respect to telecommunications that will promote the public interest; support national security; sustain and contribute to the full development of the economy and world trade; strengthen the position and serve the best interests of the United States in negotiating with foreign nations; and promote the effective and innovative use of telecommunications technology, resources, and services. In addition, the President delegated to the new office his authority over assignments to federal radio stations and directed the Secretary of Commerce to support the new office’s spectrum management responsibilities with analysis, engineering, and administrative assistance. NTIA Formed—President Carter issued an executive order to abolish the Office of Telecommunications Policy and establish an Assistant Secretary for Communications and Information, transferring the functions of the Office of Telecommunications Policy to the Department of Commerce. A departmental order was issued shortly thereafter forming the National Telecommunications and Information Administration (NTIA). First World Radio Conference in 20 Years—The first general World Administrative Radio Conference (WARC) held in 20 years was convened for 10 weeks in Geneva, Switzerland. 
The most significant results of WARC 1979 included revisions to many technical and operational standards for radio, particularly the International Table of Frequency Allocations, and the scheduling of a series of specialized conferences for the next decade. The table of allocations was expanded upward and modifications were made in various frequency bands to reflect increased use of satellite radiocommunications. FCC Establishes Cellular Duopoly—FCC concluded that the public interest would be best served with two competing cellular systems in each geographic area. Each geographic market was divided in such a way as to allow the local exchange carrier (typically, one of the Bell Operating Companies) and a nonwireline applicant to provide service. AT&T Divestiture Consent Decree—AT&T and the Department of Justice entered into a consent decree that required divestiture of the local Bell Operating Companies (BOCs) from AT&T. In addition, the decree required that the BOCs provide equal access to long distance and information service providers to their networks for interconnection, and it prohibited the BOCs from providing long distance service, information services, and telecommunications equipment manufacturing. The BOCs retained their mobile services subsidiaries in 1984 after divestiture. Congress Authorizes Department of State Communications Policy Office—Congress passed the Department of State Authorization Act for Fiscal Years 1984 and 1985, codifying into law and providing for the presidential appointment of a Coordinator for International Communications and Information Policy within the U.S. Department of State. The position had been established by the Department of State 2 years earlier, with the incumbent responsible to the Undersecretary of State for Security Assistance, Science, and Technology. 
The Coordinator acquired a rank equivalent to an Assistant Secretary of State and the personal rank of Ambassador in 1983 and became head of a new Bureau of International Communications and Information Policy in 1985. In 1994, the bureau was incorporated into the Bureau of Economic and Business Affairs, and legislation was passed that no longer required presidential appointment of the Coordinator position, reassigning it to the Bureau of Economic and Business Affairs. NTIA Created Office of International Affairs—Primary responsibility for international telecommunications, which had been handled within NTIA by the Office of Spectrum Management, was transferred to the newly created Office of International Affairs. Communications and Information Policy Bureau at Department of State—The Coordinator for Communications and Information Policy, Department of State, became the head of a new bureau—the Bureau of International Communications and Information Policy. FCC International Office Established—FCC created the Office of International Communications to coordinate international activities and policy development for spectrum and other telecommunications matters. This action was taken, in part, to prepare for the World Administrative Radio Conference in 1992 and to establish a focal point at FCC for international matters. NTIA Organization Act Passed—Fourteen years after NTIA was formed, Congress enacted the Telecommunications Authorization Act of 1992, codifying into law the existence and authority of NTIA as an executive branch agency principally responsible for advising the President on telecommunications and information policies. Two-Year Intervals Established for WRCs—Delegates to the 1992 ITU Plenipotentiary Conference, held in Geneva, Switzerland, adopted a resolution to convene World Radiocommunications Conferences (WRCs) every 2 years. 
Competitive Bidding for Spectrum Licenses Authorized by Law—Title VI of the Omnibus Budget Reconciliation Act of 1993 included several provisions addressing spectrum management as follows: The Act amended the National Telecommunications and Information Administration Organization Act to direct NTIA to identify and recommend the reallocation of a minimum of 200 MHz of spectrum used by the federal government to nonfederal government users. The Communications Act of 1934 was amended to authorize the use of competitive bidding (auctions) by FCC for certain spectrum licenses. FCC was also authorized to make available frequencies reallocated from federal to nonfederal government use. The Act amended the Communications Act of 1934 to specify that all mobile radio service providers (public and private) be treated under a comprehensive and consistent regulatory framework. The Act created the new statutory categories of commercial (CMRS) and private (PMRS) mobile radio services. Under the statute, all CMRS providers are to be treated as common carriers, whereas PMRS providers are exempt from common carrier regulation. However, the new provisions expressly preempted the states from entry or rate regulation of both CMRS and PMRS; authorized FCC to forbear from regulating CMRS where it deemed regulation unnecessary to ensure just, reasonable, and nondiscriminatory practices; and granted wireless carriers new rights to interconnect with wireline carriers. FCC International Bureau Created—FCC established an International Bureau to consolidate FCC’s various international activities. This change was made to reflect the increasingly global nature of the communications marketplace as well as the concern that international communications policy needed to be better coordinated within FCC, with industry, with other government agencies, and with other countries. 
Public Safety Spectrum Report Issued—By congressional directive, FCC and NTIA established a Public Safety Wireless Advisory Committee in 1995 to provide advice and recommendations on specific wireless communications requirements of public safety agencies through 2010. In the final report issued in September 1996, the Advisory Committee concluded that additional public safety spectrum was needed, that spectrum must be used more efficiently, and that interoperability standards must be established to meet current and future needs of public safety users. In addition, the committee proposed immediate identification of 2.5 MHz of spectrum for interoperability from new or existing allocations; allocation in the short term of 25 MHz for public safety purposes, up to an additional 70 MHz to support increased use of data, imagery, and video by the year 2010, and the use of unused spectrum in the 746-806 MHz band (television channels 60-69), as well as TV channels below 512 MHz; more flexible licensing policies to encourage the use of spectrally efficient approaches while remaining technologically neutral; more sharing and joint use of spectrum and policies to streamline cooperative use of federal and nonfederal spectrum; the use of commercial services for public safety provided that essential requirements of coverage, priority access and system restoration, security, and reliability are met; a continuing consultative process to permit the public safety community, FCC, and NTIA to adjust to new requirements and opportunities; and identification of alternative methods of funding future public safety communications systems. 
The Telecommunications Act of 1996—The Telecommunications Act was intended to “provide for a pro-competitive, deregulatory national policy framework designed to accelerate rapidly private sector deployment of advanced telecommunications and information technologies and services to all Americans by opening all telecommunications markets to competition.” NTIA Authorized to Collect Fees from Government Agencies—Included in a provision for additional fiscal year 1996 funding for NTIA, the Secretary of Commerce was authorized to charge fees to federal agencies for spectrum management, analysis, operations, and related services, and to retain and use as offsetting collections funds transferred for all costs incurred in telecommunications research, engineering, and related activities by the Institute for Telecommunication Sciences of NTIA. Congress Passes Balanced Budget Act of 1997—The Balanced Budget Act of 1997 amended FCC’s spectrum auction authority by requiring that FCC award mutually exclusive applications for initial licenses using competitive bidding procedures (not including licenses for public safety radio, digital television, and existing terrestrial broadcast licenses). Among the various other provisions in the Act addressing spectrum, NTIA was directed to reallocate another 20 MHz below 3 GHz for commercial uses, and the Act authorized private parties that win spectrum licenses encumbered by federal entities to reimburse the federal entities for the costs of relocation if the private parties seek to expedite the spectrum transfer. Defense Authorization Act Revises Spectrum Relocation Reimbursement Policy—Under the Strom Thurmond National Defense Authorization Act, any government entity using spectrum identified for reallocation that proposes to relocate is directed to notify NTIA of the marginal costs anticipated to be incurred in relocation or modification necessary to accommodate prospective nongovernment licensees. 
NTIA is directed to notify FCC of such costs before an auction of the spectrum, and FCC must notify potential bidders prior to the auction of the estimated relocation or modification costs based on the geographic area covered by the proposed licenses. Any new licensee benefiting from a government station relocation must compensate the government entity in advance for relocation or modification costs. FCC Issues Principles for Spectrum Reallocation to Encourage Development of Telecommunications Technologies—FCC issued a policy statement setting forth guiding principles for the Commission’s future spectrum management activities. The principles are designed to respond to increasing demand for spectrum, promote competition, and encourage the development of emerging telecommunications technologies. The principles are to serve as a guidepost for the reallocation of approximately 200 MHz of spectrum to enable a broad range of new radio communication services, such as expanded wireless services, advanced mobile services, new spectrum-efficient private land mobile systems, and medical telemetry systems. Spectrum Planning Directive in Defense Authorization Act—The National Defense Authorization Act for Fiscal Year 2000 contained the following requirements addressing spectrum management: The Secretary of Commerce, acting through the Assistant Secretary and in coordination with the Chairman of FCC, was directed to convene an interagency review and assessment of (1) the progress made in implementation of national spectrum planning; (2) the reallocation of federal government spectrum to nonfederal use; and (3) the implications of such reallocations for the affected federal executive agencies. 
The Secretary of Commerce, in coordination with the heads of the affected federal agencies and the Chairman of FCC, was directed to submit a report to the President; the Senate Committee on Armed Services; the Senate Committee on Commerce, Science, and Transportation; the House Committee on Armed Services; the House Committee on Energy and Commerce; and the House Committee on Science providing the results of the review and assessment not later than October 1, 2000. In order to make available for other use a band of frequencies of which it is a primary user, the Department of Defense was required to surrender use of such band of frequencies only after (1) NTIA, in consultation with FCC, identifies and makes available an alternative band or bands of frequencies as a replacement for the band to be surrendered; and (2) the Secretary of Commerce, the Secretary of Defense, and the Chairman of the Joint Chiefs of Staff jointly certify to the House Committees on Armed Services and Commerce that such an alternative band provides comparable technical characteristics to restore essential military capability that will be lost as a result of the surrendered bands. Eight MHz, previously designated for transfer from federal to nonfederal use, was reclaimed for exclusive federal government use on a primary basis by the Department of Defense. NTIA issued a report, Assessment of Electromagnetic Spectrum Reallocation, in response to these provisions in January 2001. Federal Long-Range Spectrum Plan Issued by NTIA—NTIA issued a report providing for long-range planning of radiofrequency spectrum use by the federal government. The report states that the national objectives for the use of the radio spectrum are to make effective, efficient, and prudent use of the spectrum in the best interest of the nation, with care to conserve it for uses where other means of communication are not available or feasible. 
The report also states that the government shall, in general, encourage the development and regulate the use of radio and wire communications subject to its control so as to meet the needs of national security; safety of life and property; international relations; and the business, social, educational, and political life of the nation. 3G Allocations Dominate WRC-2000—At the 2000 World Radiocommunication Conference (WRC-2000), spectrum and regulatory issues related to advanced mobile communications, including third-generation services, were discussed, and three bands (806-960 MHz, 1710-1885 MHz, and 2500-2690 MHz) were identified for their use. The United States agreed that it would study these bands domestically, but did not commit to providing additional spectrum for third-generation systems. Congress Passes the ORBIT Act—The Open-market Reorganization for the Betterment of International Telecommunications (the “ORBIT” Act) became law in March 2000 to promote a “fully competitive global market for satellite communication services for the benefit of consumers and providers of satellite services and equipment.” The Act prohibits FCC from assigning orbital locations or spectrum licenses to international or global satellite communications services through the use of auctions. Further, the Act directs the President to oppose the use of auctions of satellite spectrum bands in international forums. Executive Memorandum Issued on Advanced Mobile Communications Systems—President Clinton issued a memorandum stating the need to select radio frequency spectrum for future mobile, voice, high-speed data, and Internet-accessible wireless capacity. The memorandum established the guiding principles for executive agencies to use in selecting spectrum that could be made available for third-generation (3G) wireless systems and strongly encouraged independent federal agencies to follow the same principle in any actions they take related to the development of 3G systems. 
The memorandum directed the Secretary of Commerce to work cooperatively with FCC (1) to develop a plan to select spectrum for 3G systems by October 20, 2000, and (2) to issue by November 15, 2000, an interim report on the current spectrum use and potential for reallocation or sharing of the bands identified at the WRC-2000 that could be used for 3G systems. These actions were seen as enabling FCC to identify spectrum for 3G systems by July 2001 and auction licenses by September 2002. Interference Avoidance for Defense and Public Safety Users—In the National Defense Authorization Act for Fiscal Year 2001, the Secretary of Defense, in consultation with the Attorney General and the Secretary of Commerce, was directed to conduct an engineering study to identify (1) any portion of the 138-144 MHz band that the Department of Defense can share, in various geographic regions, with public safety radio services; (2) any measures required to prevent harmful interference between Department of Defense systems and the public safety systems proposed for operation on those frequencies; and (3) a reasonable schedule for implementation of such sharing of frequencies. The Secretary of Commerce and the Chairman of FCC were to jointly submit a report to Congress on alternative frequencies available for use by public safety systems by January 1, 2002. NTIA issued a report, Alternative Frequencies For Use by Public Safety Systems, in December 2001, and a companion report was issued by FCC. Domestic Developments on Spectrum for 3G Systems—FCC issued a final report on the use of the 2500-2690 MHz band for advanced mobile communications systems, including 3G systems. NTIA also issued a final report on the 1710-1755 MHz federal government band and the 1755-1850 MHz band. FCC Chairman Michael Powell and Secretary of Commerce Donald Evans exchanged letters in which they agreed to postpone the July 2001 deadline for FCC to identify spectrum for 3G systems. 
Secretary Evans informed Chairman Powell that he had directed the then-Acting Administrator of NTIA to work with FCC to develop a new plan for the selection of 3G spectrum to be executed as quickly as possible.

NTIA Hosts Two-Day Spectrum Summit—NTIA hosted a summit in Washington, D.C., on April 4-5, 2002, to help identify the best solutions to challenges posed by management of the nation’s airwaves. The purpose of the spectrum summit was to explore new ideas to develop and implement spectrum policy and management approaches that will make more efficient use of the spectrum; provide spectrum for new technologies; and improve the effectiveness of domestic and international spectrum management processes. The first day featured industry and government spectrum users, economists, analysts, and technologists; the second day was devoted to working sessions focused on commercial, international, and federal government perspectives.

FCC Chairman Forms Spectrum Policy Task Force—The FCC Chairman announced the formation of a Spectrum Policy Task Force to assist the Commission in identifying and evaluating changes in spectrum policy that will increase the public benefits derived from the use of radio spectrum. Composed of senior staff from various offices and bureaus of FCC, the Spectrum Policy Task Force issued a public notice on June 6, 2002, soliciting comment on various aspects of spectrum policy, including market-oriented allocation and assignment policies, interference protection, spectral efficiency, public safety communications, and international issues. In August 2002, the Spectrum Policy Task Force held four public workshops to provide additional public input to the Task Force’s review. The topics included experimental licenses and unlicensed spectrum, interference protection, spectrum efficiency, and spectrum rights and responsibilities. Participants in these workshops included representatives from academia, industry, and government.
The Task Force is tentatively scheduled to issue a report to the Commission by October 2002.

Study on Viability of Accommodating 3G Systems Concluded—NTIA released findings of an assessment performed by NTIA, FCC’s 3G Working Group, the Department of Defense, and other members of the Intra-Government 3G Planning Group on the viability of accommodating advanced mobile wireless (3G) systems in the 1710-1770 MHz and 2110-2170 MHz bands. The study concluded that 90 MHz of this spectrum can be allocated for 3G services to meet increasing demand for new services without disrupting communications systems critical to national security.
The radiofrequency spectrum is the medium that enables wireless communications of all kinds, such as mobile phone and paging services, radio and television broadcasting, radar, and satellite-based services. As new spectrum-dependent technologies are developed and deployed, the demand for this limited resource has escalated among both government and private sector users. Meeting these needs domestically is the responsibility of the Department of Commerce’s National Telecommunications and Information Administration (NTIA) for federal government users and the Federal Communications Commission (FCC) for all other users. The current legal framework for domestic spectrum management evolved as a compromise over the questions of who should determine how spectrum is allocated among competing users and what standard should be applied in making this determination. Current methods for allocating spectrum face difficulties, and FCC and NTIA’s efforts are not guided by a national spectrum strategy. Since nearly all of the usable radio spectrum has been allocated already, accommodating more services and users generally involves redefining current radiofrequency allocations. One method used by FCC and NTIA is to increase the amount of spectrum that is designated for shared use, so that additional types of services or users may be placed within a particular frequency allocation. Another method, called band-clearing, involves relocating a service or user from one area of the spectrum to another in order to make room for a new service or user. The challenges the United States faces in preparing for World Radiocommunication Conferences, where decisions are made regarding the global and regional allocation of spectrum, have raised questions about the adequacy of the United States’ current preparatory process. Under the current structure, FCC and NTIA develop positions on agenda items through separate processes that involve the users of the spectrum they manage.
NTIA has several oversight activities to encourage accountability and efficient use of the spectrum by federal agencies, but federal officials stated that the effectiveness of these activities is hindered by staffing and resource shortages. Specifically, NTIA has directed federal agencies to use only as much spectrum as they need and has established frequency assignment and review processes that place primary responsibility for promoting efficiency in the hands of the agencies. As an accountability measure, NTIA requires that agencies justify their initial need for a frequency assignment and periodically review their continued need for the assignment, generally every 5 years.
CMS administers the Medicare program with the assistance of about 50 claims administration contractors. As part of their duties, contractors deny claims that are the responsibility of other insurers. In addition, they are required to recover mistaken payments that were made before it could be determined that the beneficiary had other insurance—such as an EGHP, an automobile or other liability insurance plan, workers’ compensation, or other types of coverage. To ensure that contractors adequately perform these tasks, CMS periodically monitors and evaluates their performance. Contractors are required to record recovery information pertaining to EGHP debt cases in the MPaRTS database. MPaRTS tracks the status of each EGHP case and provides CMS with information on the amount of mistaken payments identified, the amount demanded to be repaid, the amount recovered, and whether the case is currently open or closed. Although CMS does not have a database for tracking liability and workers’ compensation cases that is comparable to MPaRTS, CMS requires contractors to submit quarterly accounts receivable reports for these and other types of cases. These reports show the aggregate amount of outstanding debt, but do not provide detail at the individual case level. To prevent mistaken MSP payments, Medicare claims administration contractors match beneficiaries’ health care claims against information contained in Medicare’s Common Working File (CWF)—a repository of claims and beneficiary enrollment data—to determine whether Medicare is the primary or secondary payer. Claims are paid if the CWF indicates that Medicare is the primary payer. However, the CWF may not always contain accurate information. The MSP status of some beneficiaries is sometimes in a state of flux—for example, a retired beneficiary may return to the workforce and receive coverage under an EGHP for 6 months, and then leave that job. 
This information may not be recorded in a timely manner, leading to mistaken payments. In addition, the CWF can contain inaccurate information if beneficiaries do not notify CMS of their insurance status when they become eligible for Medicare or if they provide incorrect insurance information. Furthermore, although the CWF is periodically updated with new insurance information, there is a lag between the time beneficiaries obtain coverage and when CMS learns of this coverage. In the interim, contractors may mistakenly pay beneficiaries’ claims. To identify mistaken MSP payments when an EGHP is the primary payer, claims administration contractors use information provided by CMS and the Coordination of Benefits Contractor (COBC). The COBC is a specialized contractor that does not process Medicare claims. Instead, the COBC is charged with developing information on beneficiaries who may have other primary health insurance through a process known as the data match. The purpose of the data match is to identify beneficiaries or their spouses who are employed and thus may be covered by an EGHP. To facilitate data matching, the Social Security Administration sends the Internal Revenue Service a list containing the Social Security numbers of Medicare beneficiaries. The Internal Revenue Service then matches the list against beneficiary income tax return data and sends the results to the COBC for further analysis. For example, if tax records show that an employer paid a beneficiary at least $10,000 during the previous year, the COBC would contact the beneficiary’s employer to determine whether the beneficiary was covered by that employer’s group health plan. CMS compares information developed by the COBC to the national claims history file, the most comprehensive source of paid claims information. This comparison allows CMS to determine whether Medicare may have mistakenly paid claims on behalf of the beneficiary.
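The wage-based screen in the data match described above amounts to a simple filter. The sketch below is a hypothetical illustration: the only figure taken from the text is the $10,000 wage trigger, while the record layout, field names, and function name are invented for the example.

```python
# Hypothetical sketch of the data match screen described above.
# Only the $10,000 wage trigger comes from the text; the record
# layout and names are illustrative, not CMS's actual design.

WAGE_TRIGGER = 10_000  # reported wages that prompt an employer inquiry

def data_match_leads(tax_records):
    """Return beneficiaries whose reported wages suggest possible EGHP coverage."""
    return [r["beneficiary"] for r in tax_records if r["wages"] >= WAGE_TRIGGER]

records = [
    {"beneficiary": "A", "wages": 24_000},  # above the trigger: employer is contacted
    {"beneficiary": "B", "wages": 4_500},   # below the trigger: no inquiry
    {"beneficiary": "C", "wages": 10_000},  # at the trigger: employer is contacted
]
print(data_match_leads(records))  # ['A', 'C']
```

The actual process adds further steps, of course: the employer's response and the comparison against the national claims history file determine whether a recoverable case exists.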
If the mistakenly paid claims total at least $1,000, CMS assigns the case to the claims administration contractor that processed and paid the claims. Upon receipt of the EGHP debt case, claims administration contractors have 60 days to perform certain tasks to determine whether an attempt should be made to recover the debt. The contractor must first verify that the information being used as a basis for recovering the debt is correct and that it has not already recouped the mistaken payments. If the case passes this initial validation process, the contractor will initiate recovery by sending a demand letter to the beneficiary’s employer and insurance company or third-party administrator, requesting payment within 60 days. If there is no response to the demand letter within 60 days, interest begins to accrue on the debt. Contractors then send a second letter explaining that if a response or payment is not received within another 60 days, the matter will be referred to the Department of the Treasury for collection. Responses to these letters can include repayment with interest or an explanation as to why the employer and associated health insurer are not responsible for the debt. This explanation may include documentation indicating that the employee retired and thus discontinued health coverage or never obtained coverage through the employer. The procedures followed by contractors to recover mistaken payments from liability insurers and workers’ compensation plans differ from those used when the primary payer of an MSP debt is an EGHP. In a liability or workers’ compensation case, mistaken payments made on behalf of a beneficiary are not related to a period of insurance coverage, but to a particular incident—for example, an automobile accident or workplace injury. 
The task of the contractor in such cases is to identify all paid medical claims related to the incident and to inform the beneficiary or the beneficiary’s attorney of the responsibility to repay Medicare in the event that they receive an insurance settlement for their medical expenses. Because beneficiaries may require protracted medical treatment for their injuries, it may take several years before the total amount of payments related to the injury is known. In the interim, a contractor may repeatedly review the beneficiary’s claims history to determine whether Medicare has paid new claims related to the injury. We previously reported that CMS maintained a substantial backlog of uncollected debt in fiscal year 2000. Although the Debt Collection Improvement Act of 1996 required that agencies refer debt delinquent for more than 180 days to the Department of the Treasury, CMS still had not fully implemented this requirement. Prior to 2000, CMS did not instruct claims administration contractors to refer delinquent EGHP cases to the Department of the Treasury for collection. As a result, CMS maintained a substantial backlog of older cases that remained open, but inactive, for many years. CMS’s administration of the Medicare program will undergo significant changes over the next several years as the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) is implemented. MMA provides CMS with increased flexibility in contracting with new entities to assist it in operating the Medicare program. While CMS has relied primarily on the claims administration contractors to perform most of the key business functions of the program, the new law authorizes CMS to enlist a variety of contractors to perform these tasks. For example, CMS could use new contractors to process and pay claims and to perform financial management and payment safeguard activities. CMS is just beginning to develop plans to implement MMA’s contracting reform provisions. 
The contracting reform amendments begin to phase in on October 1, 2005. The competitive bidding of all contracts is required for contract periods that begin on or after October 1, 2011. The agency expects to issue its implementation plan for contracting by October 1, 2004. Since fiscal year 2000, the cost-effectiveness of EGHP recovery activities has significantly declined. The decline occurred because the volume of EGHP debt cases significantly decreased—in fiscal year 2003, almost half of the contractors were assigned fewer than 50 cases—while, at the same time, the cost to CMS for maintaining debt collection capabilities at all claims administration contractors increased slightly. Moreover, CMS funded eight contractors that were not assigned any EGHP debt cases. The recovery process is also constrained by procedures that prevent contractors from maximizing their recoveries of mistaken payments. Because contractors have access only to claims that they have paid, they cannot identify, and thus collect, mistaken payments made by other contractors. In addition to these structural problems, we found that in 3 of the last 4 years CMS did not transmit a substantial number of EGHP cases to the claims administration contractors, resulting in missed recoveries. EGHP recovery activities are no longer cost-effective. To measure cost-effectiveness, we compared the amount that CMS spent on contractor recovery activities for a given fiscal year with the amount recovered from all cases that were opened during the same year—regardless of when the funds were recovered. While Medicare recovered about $2.49 for each dollar it spent on EGHP recovery activities in fiscal year 2000, this ratio declined to $1.80 in 2001.
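The cost-effectiveness measure used here is a simple ratio of dollars recovered to dollars spent. The sketch below applies it to fiscal year 2003; the reported return for that year was 38 cents per dollar, and the dollar amounts shown are illustrative assumptions consistent with that ratio and with the roughly $10 million spending figure cited elsewhere in this report.

```python
# Cost-effectiveness as dollars recovered per dollar spent, the measure used
# in the text. The FY2003 spending base (~$10 million) and the implied
# recoveries are illustrative assumptions consistent with the reported ratio.

def recovery_ratio(recovered, spent):
    """Dollars recovered per dollar spent on EGHP recovery activities."""
    return recovered / spent

fy2003_spent = 10_000_000      # CMS spent almost $10 million on EGHP recovery
fy2003_recovered = 3_800_000   # implied by the reported 38-cents-per-dollar return

ratio = recovery_ratio(fy2003_recovered, fy2003_spent)
print(f"${ratio:.2f} recovered per dollar spent")  # $0.38 recovered per dollar spent
```

A ratio below $1.00 means the activity loses money; the fiscal year 2000 figure of $2.49 per dollar, by contrast, indicated a clearly positive return.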
Although there are no comparable data for fiscal year 2002 because CMS did not open any new EGHP cases that year, thus allowing contractors time to reduce their backlog of old cases, the decline in cost-effectiveness continued in fiscal year 2003 when CMS resumed opening new EGHP cases. In that year, Medicare lost money on EGHP recovery activities, recovering only 38 cents for every dollar spent. (See table 1.) The lack of cost-effectiveness of the EGHP recovery process resulted partly from a declining workload, which limited the potential for recovery. The number of new MSP EGHP debt cases has decreased by more than 80 percent in recent years, from 49,240 cases in fiscal year 2000 to 7,634 cases in fiscal year 2003. CMS officials told us that improvements in identifying beneficiaries with other insurance before a claim is paid have reduced the number of mistakenly paid MSP claims. Consequently, according to CMS officials, this has lessened the need to recover these payments via the EGHP recovery process. These officials also projected that the number of EGHP cases assigned to contractors could continue to decline. Not only has the number of EGHP cases declined since fiscal year 2000, but the complexity of these cases and the resources required to process many of them have also decreased. Since fiscal year 2000, the claims administration contractors closed more than half of the cases during their initial computer screening process. That is, they often found that the mistaken payments totaled less than $1,000, another insurer voluntarily paid the claims, or the COBC updated the CWF to show that the beneficiary did not have other primary coverage, such as an employer-sponsored group health plan, during the time the services were delivered. In such instances, contractors are not required to correspond with employers and insurers.
It is only a relatively small number of cases—those that pass the initial screening process—that require significant contractor resources to send demand letters, process the responses, and archive file materials. As shown in figure 1, of the 49,240 EGHP cases processed by contractors in fiscal year 2000, 20,487—about 42 percent—were resource-intensive cases that entailed sending a demand letter. In contrast, only 1,276 cases—about 17 percent—involved a demand letter in fiscal year 2003. CMS’s payments to contractors for recovery activities have not reflected the sharp decline in the number of EGHP debt cases that occurred in fiscal year 2003. For example, in fiscal year 2000, the three contractors with the largest workloads received a combined budget of less than $1 million and processed 7,708 EGHP cases. The workload of those three contractors was larger than the entire fiscal year 2003 workload, for which CMS spent almost $10 million on contractors’ EGHP debt recovery activities. This disparity between workload and budget in fiscal year 2003 is even more apparent at the individual contractor level. As shown in table 2, 8 of the 51 claims administration contractors processed 400 or more EGHP cases—representing about 52 percent of the total EGHP workload of 7,634 cases. However, almost half of the contractors were assigned fewer than 50 cases. Despite their small combined workload—4 percent of all EGHP cases in fiscal year 2003—CMS allocated to these contractors more than a quarter of its EGHP budget, about $2.5 million, to support EGHP and certain other recovery activities. Moreover, CMS funded 8 contractors that were not assigned any EGHP debt cases. CMS’s budget process does not efficiently match funding for contractor recovery activities to contractors’ actual workloads. CMS pays each contractor to maintain an infrastructure to support the recovery of EGHP debt, regardless of the number of cases the contractor processes during the year.
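A rough per-case comparison, using only the fiscal year 2003 figures above, illustrates the funding-versus-workload mismatch. This is a back-of-the-envelope sketch: the resulting per-case amount overstates unit cost somewhat, because the $2.5 million also supported recovery activities other than EGHP cases.

```python
# Back-of-the-envelope check on the FY2003 funding-versus-workload mismatch,
# using only figures from the text. The per-case amount is rough: the
# $2.5 million also supported recovery activities other than EGHP cases.

total_cases = 7_634                 # total FY2003 EGHP workload
small_share = 0.04                  # combined share of contractors with <50 cases
small_budget = 2_500_000            # "about $2.5 million" allocated to them

small_cases = total_cases * small_share     # roughly 305 cases
per_case = small_budget / small_cases
print(round(small_cases), round(per_case))  # ~305 cases, over $8,000 per case
```

For comparison, in fiscal year 2000 the three largest contractors processed 7,708 cases on a combined budget of less than $1 million, or on the order of $130 per case.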
In order to process EGHP cases forwarded to them by CMS, the claims administration contractors maintain an infrastructure that results in costs such as wages, equipment, and records. Typically, this includes a staff of MSP examiners who review EGHP cases, contact other potential insurers, evaluate explanations from insurers as to why the MSP debt may not be valid, make referrals to the Department of the Treasury when a debt is not paid within 180 days, and archive case files. Each contractor must also maintain screening software to identify and exclude EGHP debt cases that do not meet the $1,000 threshold. As a result, some contractors may receive funding for their infrastructures even though they process few or no cases during the year, as occurred in fiscal year 2003. In comparison to other MSP activities performed by contractors—such as maintaining computer programs that automatically identify and deny MSP claims—EGHP recoveries are expensive to conduct and no longer provide a return on investment. In fiscal year 2003, the return on investment for all types of MSP activities combined was 48 to 1. That is, Medicare contractors spent an estimated $95.6 million for all MSP activities and produced identifiable savings of approximately $4.6 billion, resulting in $48 saved for every dollar spent. We found that several system limitations create barriers to recovering mistaken payments and reduce program savings. Some mistakenly paid claims may be missed because beneficiaries received medical services in more than one state, and thus had their claims processed by more than one contractor. Because contractors have access only to claims records that they process, they are unable to identify claims processed by other contractors. In addition, beneficiaries whose total MSP claims exceed $1,000, but are split among two or more contractors, may not have all of their mistaken payments recovered if the payments made by any single contractor total less than the $1,000 threshold. 
Although CMS officials could not quantify the effect of these constraints on recoveries, they told us that they believe that these limitations have significantly reduced MSP savings. For example, a beneficiary who lives in the Midwest but spends the winter in the South and receives health care services in both locations will have claims processed by different contractors. If mistaken payments for $2,000 were made for services the beneficiary received during the year—for example, $1,200 in one location and $800 in the other—only the contractor with payments exceeding the threshold would pursue a recovery. Therefore, although the primary payer would be responsible for the entire $2,000 in services, Medicare would attempt to recover only a portion of the amount owed. A similar inefficiency occurs when beneficiaries receive inpatient services covered by Part A of Medicare and physician services covered by Part B. Different contractors typically process Part A and Part B claims, but they are not required to coordinate EGHP recoveries with one another. This lack of coordination also results in missed savings opportunities when neither the Part A nor Part B claims individually meet the $1,000 threshold. Even if both the Part A and Part B claims exceed this threshold, greater administrative costs are incurred by both CMS and private employers, as two different contractors attempt to recoup payments from the same payer. Finally, the success of the current system depends on CMS distributing EGHP cases to the claims administration contractor that processed the mistaken payments. Our review of EGHP debt cases revealed that, during fiscal years 2000, 2001, and 2003, CMS neglected to transmit 2,364 cases to the contractors, representing more than $28 million in potential mistaken payments. 
CMS officials told us that the accurate referral of EGHP cases has grown more difficult in recent years as some contractors have left the Medicare program and other contractors subsequently assumed their existing workload. They explained that they suspected that these EGHP cases were overlooked when one contractor processing claims for beneficiaries in several states left the program and the related cases were never assigned to the replacement contractors. As a result, no recovery action was ever initiated for these cases. By using the percentage of potential mistaken payments that are typically recovered—7 percent—we estimate that CMS’s failure to transmit these cases to contractors for potential recovery cost the Medicare program approximately $2 million. We were unable to fully evaluate the effectiveness of the EGHP debt recovery efforts of the claims administration contractors we visited because three of the four contractors were unable to produce all of the case files we requested. Although the files we examined indicated that these contractors were appropriately managing their EGHP workload, the volume of unavailable files precluded us from reaching an overall conclusion on their performance. CMS’s recent contractor performance evaluations found similar records management deficiencies and raised additional questions about contractors’ effectiveness. We found it difficult to thoroughly assess the performance of all of the contractors we visited. At each contractor, we randomly selected a sample of cases to review. The number selected varied by contractor and totaled 644 cases for all contractors combined. However, 78 case files could not be located. Although one contractor was able to produce the files and supporting documentation for all the cases we requested, the other three contractors poorly managed their records and were unable to provide all of the files and supporting documentation we had requested in advance of our visits. 
The percentage of missing cases at these contractors ranged from 4 to 24 percent. Because these files were not available, we were unable to fully assess whether the contractors made sufficient efforts to collect MSP debt. For example, without supporting documentation for those cases, we could not conclusively determine that the contractors had followed all the appropriate recovery procedures. Of the 566 cases available for review, we found that contractor files were complete and contained appropriate documentation to support the contractor’s decision to close each case without making a recovery. We reviewed two types of cases: those that were closed during the initial screening process after the contractor determined that the $1,000 threshold was not met, and those that were closed after the contractor sent a demand letter to the employer requesting payment. Together, these two types of cases constituted about 65 percent of the EGHP workload during fiscal years 2000 and 2001. For cases that were closed because they did not meet the $1,000 threshold, contractors provided us with adequate supporting documentation showing that the involved claims totaled less than this amount. Other cases were properly closed because the employers provided valid reasons as to why they were not responsible for the MSP debt. For example, if a beneficiary had retired and was not covered by the employer’s insurance at the time the claims were submitted, contractor case files contained correspondence from the employer documenting this fact. In about a third of the MSP cases we selected for review, the private side of the contractor’s business sold insurance to the employer that was initially identified as having responsibility for MSP debt. 
Although this situation creates a potential conflict of interest for the contractor because it must collect funds from its private business side, we did not find evidence that contractors closed such cases inappropriately or treated them differently from others. Our review also found that one contractor made errors entering information into CMS’s MPaRTS system, which tracks the status of EGHP cases. Although such errors do not mean that the contractor had inappropriately processed cases, they make it difficult for CMS to monitor the cases’ status. The tracking system uses different codes to describe the status of MSP cases. For example, there is a code to indicate that the case was closed after a demand letter was sent, and another to indicate that the case was closed because the $1,000 recovery threshold was not met. This contractor did not correctly apply these two codes and miscoded about 18 percent of the cases we reviewed. CMS’s recent contractor performance evaluations of MSP recovery activities support our finding of poor records management. CMS evaluated the MSP activities of 12 contractors in fiscal year 2001 and another 12 contractors in fiscal year 2002. During these evaluations, CMS reviewed EGHP case files from contractors. These evaluations are based on a relatively small number of case files—10 to 20—and therefore do not provide in-depth assessments of contractors’ performance. However, the evaluations conducted in 2001 and 2002 highlighted contractor performance problems similar to those we identified. That is, CMS found that several contractors, which included some that were not part of our review, had missing case files and entered inaccurate information into the CMS tracking database. For example, during a review of one contractor, CMS requested 20 EGHP case files, but the contractor was able to locate only 12 files. 
In addition, CMS found tracking-system coding errors—in 2001, 5 of the 12 contractors reviewed did not use the correct status code when entering information into the CMS computer system that tracks the status of EGHP cases. CMS evaluations identified additional problems in fiscal years 2001 and 2002, suggesting other weaknesses in contractors’ MSP recovery activities, as illustrated by the following examples:

Staffing problems. One contractor discontinued processing data match cases for 3 months when the sole staff member performing this task took an extended leave of absence. At another contractor, CMS determined that the number of staff assigned to MSP recoveries was insufficient to process the contractor’s large workload. CMS also noted that a contractor had recently changed the educational requirements for MSP staff. Because most of the current staff did not possess a college degree as required by the contractor’s revised standard, the contractor retained an almost entirely new MSP staff. The new staff told CMS reviewers that their training was inadequate to prepare them for processing the workload.

Delays in processing correspondence. In examining documentation at one contractor, CMS reviewers identified a significant backlog of correspondence. According to CMS’s estimate, there were over 2,400 pieces of mail awaiting action—including checks and correspondence from employers, insurers, and other contractors. The oldest correspondence awaiting action was more than 2 years old—well beyond CMS’s requirement that contractors match incoming mail with established cases and respond to such correspondence within 45 days.

Failure to appropriately document case determinations. At one contractor, CMS reviewers found several case files in which the documentation did not support the action recorded.
For example, the contractor closed a case and indicated that a full recovery was made; however, the file did not show that a check was received from either an employer or insurer. At another contractor, CMS reviewers examined cases that were inappropriately closed without recovery because the contractor had not promptly notified the EGHP of the debt, as required. In this instance, CMS found that once the contractor recognized its own untimeliness, it erred again by closing these cases without confirming that the health plan’s time limit for accepting claims had, in fact, expired.

Inadequate security measures. Because the recovery process partially relies on Internal Revenue Service tax information, contractors are required to take certain precautions to prevent unauthorized access. At one contractor, CMS found that the workstation of the person responsible for processing the EGHP workload was situated next to the workstations of staff who did not have authorization to access restricted tax information. Reviewers found that files were stored in unlocked file cabinets and that sensitive printed materials were left in plain view in a general work area, rendering the information easily accessible to anyone in the facility.

Recognizing the need to improve the coordination of its MSP recovery efforts, CMS contracted for the development of a new recovery system—the Recovery Management and Accounting System (ReMAS)—in 1998. The purpose of ReMAS is to improve the identification, tracking, and recovery of mistaken payments. ReMAS was designed to enhance the MSP recovery process by automating some tasks performed manually and by reducing the time required to collect MSP debt. As of May 2004, CMS has deployed the liability insurance and workers’ compensation component of ReMAS to nine contractors.
ReMAS is designed to receive and evaluate leads from CWF electronically, a function that is now performed in separate steps by CMS staff and individual claims administration contractors. These leads consist of information suggesting that a beneficiary has other coverage that should be primary. CMS officials claim that ReMAS will streamline other functions as well. For example, when new information on a beneficiary’s MSP status is added to CWF, ReMAS is expected to determine, on a daily basis, whether mistaken payments were made on his or her behalf. Currently, the contractors review the occurrence of mistaken payments at varying intervals ranging from quarterly to semiannually. Once ReMAS determines that Medicare has paid claims that were the primary responsibility of another insurer, it will generate a case that can be assigned to any contractor for recovery. It will no longer be necessary for the contractor that processed the mistakenly paid claims to perform recovery activities. CMS officials told us that they believe that ReMAS will have several advantages over the current process. First, efficiencies gained through ReMAS would enable contractors to pursue MSP debt that involves amounts less than the current $1,000 threshold, resulting in additional recoveries. Second, ReMAS could facilitate the consolidation of MSP debt recovery efforts among a handful of contractors, as each contractor would have access to all paid claims. CMS officials indicated that ReMAS would enable them to reduce administrative costs, provide contractors with a more consistent and predictable workload, and simplify contractor oversight activities. (See app. II for more information comparing ReMAS to the present recovery system). Although CMS has spent $7 million on the development of this system, which has now spanned 6 years, ReMAS’s implementation is progressing slowly. It remains in the early implementation stages—testing on EGHP cases started in June 2004. 
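The case-generation flow the report ascribes to ReMAS can be sketched in simplified form. This is not CMS's implementation; the data structures, field names, and round-robin contractor assignment below are illustrative assumptions, and only the $1,000 minimum reflects the threshold discussed in this report.

```python
# Simplified, hypothetical illustration of ReMAS-style case generation.
THRESHOLD = 1_000  # CMS instructs contractors not to pursue smaller debts

def generate_recovery_cases(msp_updates, paid_claims, contractors):
    """For each new MSP coverage record, total the claims Medicare paid
    during the period another insurer was primary; open a recovery case
    when that total meets the threshold."""
    cases = []
    for update in msp_updates:
        mistaken = [
            c for c in paid_claims
            if c["beneficiary"] == update["beneficiary"]
            and update["coverage_start"] <= c["service_date"] <= update["coverage_end"]
        ]
        total = sum(c["amount"] for c in mistaken)
        if total >= THRESHOLD:
            cases.append({
                "beneficiary": update["beneficiary"],
                "debt": total,
                # Unlike the current process, the case need not go back to
                # the contractor that paid the claims.
                "assigned_to": contractors[len(cases) % len(contractors)],
            })
    return cases
```

The key design point described in the report is the last step: because the case is generated centrally, it can be assigned to any contractor rather than only to the one that processed the mistakenly paid claims.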
Critical tasks related to ReMAS’s implementation have taken several years to complete. To date, only the initial software testing and validation for the liability and workers’ compensation components have been completed. CMS’s initial plans for implementing ReMAS have focused on recovering liability insurance and workers’ compensation debt. Thus far, 17 contractors have received training in the use of ReMAS. CMS officials told us that as of May 2004, the liability and workers’ compensation components of ReMAS have been deployed to nine contractors. The remaining contractors that process MSP liability cases are scheduled to implement ReMAS by October 2004. ReMAS also has the potential to recover mistaken payments associated with EGHPs—currently handled through the data match process. CMS recently expanded the scope of ReMAS to include employer-sponsored group health plans, but details related to incorporating EGHP cases in the system are unclear. Unlike liability and workers’ compensation cases, which are related to specific accidents or injuries, EGHP cases are based on a beneficiary’s dates of employer-sponsored coverage. This distinction requires enhancements to ReMAS to ensure that it can process EGHP cases. According to CMS’s timetable, preliminary tasks such as computer testing, validation, and documentation of the EGHP component of ReMAS will be completed in September 2004. While CMS expects to pilot test the EGHP component with two contractors in October 2004, it has not specified when it will implement ReMAS for EGHP cases at all contractors. As Medicare’s primary steward, CMS should make a concerted effort to recoup funds owed the program. However, recovery efforts should be planned and executed with cost-effectiveness in mind. CMS’s efforts to recover MSP debt from cases that involve EGHPs were cost-effective as recently as a few years ago, but CMS is now operating a recovery system that is losing money.
Although funding for contractors’ EGHP debt recovery activities has slightly increased since fiscal year 2000, contractor workloads have decreased by 80 percent. In addition, funding for these activities is not always related to contractors’ workloads—in fiscal year 2003, almost half of the contractors received fewer than 50 cases to process while 8 of these, which had a collective budget of more than $1.8 million, received no cases at all. As recently as fiscal year 2000, three contractors collectively processed a workload that exceeded the entire EGHP workload of all contractors in fiscal year 2003, suggesting that consolidation of debt recovery activities among a smaller number of contractors is feasible. The current system, with over 50 contractors involved in EGHP recovery activities, is cumbersome to administer, and poor record-keeping makes it difficult to determine whether contractors are doing all they can to recover debt. One of the keys to improving the cost-effectiveness of MSP debt recoveries may rest with CMS’s new ReMAS system. Plans to expand the scope of ReMAS to recover debt associated with employer-sponsored group health plans could ultimately address current operational weaknesses, such as an inefficient distribution of workload and limited coordination among contractors. Now that CMS has been given new authority to contract with a variety of entities to assist it with managing the Medicare program, it should take advantage of ReMAS’s capability to consolidate debt recovery efforts with a smaller number of contractors and thereby improve the efficiency of the program. 
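The cost-effectiveness question at issue here reduces to a simple ratio of dollars recovered to dollars spent on recovery activities. A minimal sketch, using hypothetical dollar amounts chosen only to reproduce the 38-cents-per-dollar result reported for fiscal year 2003:

```python
# Sketch of the cost-effectiveness measure: dollars recovered per dollar
# spent on recovery activities. The dollar amounts below are hypothetical.

def recovery_ratio(amount_recovered, amount_spent):
    """Dollars recovered per dollar of administrative spending."""
    return amount_recovered / amount_spent

ratio = recovery_ratio(3_800_000, 10_000_000)  # hypothetical amounts
print(f"${ratio:.2f} recovered per dollar spent")  # $0.38 recovered per dollar spent
print(f"cost-effective: {ratio > 1.0}")            # cost-effective: False
```

A ratio below 1.0 means the program spends more than it recovers, which is the sense in which the report describes the current system as losing money.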
We recommend that the Administrator of CMS (1) develop detailed plans and time frames for expanding ReMAS to include EGHP cases and expedite implementation of the EGHP component of ReMAS, and (2) improve the efficiency of MSP payment recovery activities by consolidating the EGHP workload under a smaller number of contractors and ensuring that contractor budgets for EGHP recovery activities more closely reflect their actual workloads. In written comments on a draft of this report, CMS agreed with our recommendations. CMS said it recognizes the importance of improving the cost-effectiveness of its debt collection process and has taken steps to expedite implementation of the EGHP component of ReMAS. CMS stated that operational efficiencies gained through the implementation of ReMAS make it feasible to consolidate recovery activities. CMS’s comments are reprinted in appendix III. CMS also provided us with technical comments, which we incorporated as appropriate. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance. At that time, we will send copies to the Administrator of CMS and other interested parties. We will then make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (312) 220-7600. An additional GAO contact and other staff who made contributions to this report are listed in appendix IV. To assess the cost-effectiveness of the current system for recovering Medicare Secondary Payer (MSP) debt, we analyzed information from two CMS databases—the Contractor Administrative-Budget and Financial Management (CAFM) system and the Mistaken Payment and Recovery Tracking System (MPaRTS).
CAFM provided information on CMS’s budgets for contractors and MPaRTS provided information on the number of potential MSP recovery cases processed by contractors and the amount of savings from recovery activities. To evaluate contractor performance in recovering MSP debt, we focused on cases that involved beneficiaries and their spouses who may have been employed and covered by an employer-sponsored group health plan (EGHP). These cases consisted of potentially mistakenly paid claims for services a beneficiary appeared to have received while covered by an EGHP. We selected 4 geographically dispersed contractors that processed a high volume of EGHP debt cases—all 4 were among the top 10 contractors that processed the highest number of such cases in 2000 and 2001. At each contractor, we randomly selected a sample of cases that were opened in 2000 and 2001 for review—the number of cases selected at each contractor varied, ranging from 136 to 207. Of the 644 cases selected, 566 were available for review. Contractors were unable to provide documentation for 78 cases. Because contractors close the majority of cases without making recoveries, we specifically focused on such cases in order to determine whether contractors made sufficient effort to recover MSP debt and followed appropriate procedures. Our inspection of these files consisted of reviewing contractor adherence to CMS’s detailed procedures for steps taken during the recovery process and the sufficiency of the contractor’s documentation for closing data match cases without recovering funds or referring cases to the Department of the Treasury for collection. All four of the Medicare contractors we examined sold private health insurance. Because of the possibility that the private side of their businesses could have been responsible for reimbursing Medicare for MSP debt, our examination included an assessment of whether this potential conflict of interest affected contractors’ actions in collecting this debt. 
Using insurer information available from MPaRTS and contractor case files, we identified cases that involved the contractor’s private health insurance business and compared them to the other cases. Our analysis found little difference between the two types of cases in terms of missing documentation—12.0 percent of cases that involved the contractor’s private-side health insurance business were not documented, compared with 12.1 percent for the other cases. To assess CMS efforts to oversee and improve MSP debt recovery, we reviewed program guidelines and memoranda and interviewed officials from CMS and Medicare contractors. To identify contractor performance problems, we also examined the results of CMS’s fiscal years 2001 and 2002 contractor performance evaluations pertaining to contractors’ MSP operations. Although we did not validate CMS’s CAFM and MPaRTS information, CMS has procedures in place to ensure the accuracy of these databases. The MPaRTS database, which tracks MSP debt recoveries from EGHPs, contains internal logic checks that prevent contractors from incorrectly entering certain types of information. In addition, CMS periodically reviews MPaRTS records as part of its contractor performance evaluations. CAFM is a financial management system established to enable CMS to control the national budget for the Medicare contractors. It contains a small number of system checks that ensure that expenditure information provided by contractors is totaled correctly. The reliability of the data is ensured through independent audits. In addition, CMS personnel review the data throughout the year. To identify the agency’s efforts to enhance the MSP process, we reviewed documents and interviewed CMS officials on CMS’s planned Recovery Management and Accounting System (ReMAS), a new CMS system for MSP debt recovery activities that is under development.
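The conflict-of-interest comparison described above amounts to computing the missing-documentation rate for two groups of cases. A sketch of that calculation, using hypothetical case counts rather than the actual 566-case sample:

```python
# Sketch of the missing-documentation comparison used to test for a
# conflict of interest. Case counts below are hypothetical; the report's
# actual sample yielded rates of 12.0 and 12.1 percent.

def missing_doc_rate(cases):
    """Percentage of cases in a group lacking supporting documentation."""
    missing = sum(1 for c in cases if not c["documented"])
    return 100 * missing / len(cases)

# Hypothetical split between cases involving the contractor's own
# private-side insurance business and all other cases.
own_business = [{"documented": i >= 3} for i in range(25)]    # 3 of 25 missing
other_cases = [{"documented": i >= 12} for i in range(100)]   # 12 of 100 missing

# Nearly equal rates across the two groups would suggest no
# conflict-of-interest effect on documentation.
print(missing_doc_rate(own_business), missing_doc_rate(other_cases))  # 12.0 12.0
```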
We conducted our work from December 2002 through July 2004 in accordance with generally accepted government auditing standards. The following table highlights differences between the way MSP case development, validation, and recovery are implemented under the present data match recovery system and how they will be implemented under ReMAS. Major contributors to this report were Richard M. Lipinski, Barbara Mulliken, Enchelle Bolden, Shaunessye Curry, and Kevin Milne.
Last year, employer-sponsored group health plans (EGHP) were responsible for most of the nearly $183 million in outstanding Medicare secondary payer (MSP) debt. MSP debts arise when Medicare inadvertently pays for services that are subsequently determined to be the financial responsibility of another. The Centers for Medicare & Medicaid Services (CMS) administers Medicare with the assistance of about 50 contractors that, as part of their duties, are required to recover MSP debt. GAO was asked to determine whether Medicare contractors are appropriately recovering MSP debt. GAO (1) assessed the cost-effectiveness of the current debt recovery system and (2) identified CMS's plans to enhance the recovery process. GAO analyzed workload and budget information and assessed plans to develop a new debt recovery system--the Recovery Management and Accounting System (ReMAS). Medicare's system for recovering MSP debt from EGHPs is no longer cost-effective, with CMS recovering only 38 cents for every dollar it spent on recovery activities in fiscal year 2003. This is largely due to workload and budgetary factors. While the number of new debt cases referred to contractors has declined by more than 80 percent since fiscal year 2000, CMS's budget for contractor recovery activities has remained relatively unchanged. As a result, contractors were funded at a level that exceeded their workload. Almost half of the contractors that CMS funded to process the 7,634 cases associated with the fiscal year 2003 workload were assigned fewer than 50 cases--and eight were not assigned any. The current system is also constrained by procedures that prevent contractors from maximizing recoveries. For example, CMS has instructed contractors not to pursue cases in which the amount of mistaken payments made on behalf of the same beneficiary is less than $1,000. 
In addition, CMS neglected to transmit more than 2,000 cases to the contractors--which depend on these transmittals to initiate recoveries--during fiscal years 2000, 2001, and 2003. CMS is developing a new recovery system--ReMAS--to enhance the MSP recovery process. This system has the potential to help increase savings, provide CMS with greater flexibility in distributing the workload, and simplify the collection of MSP debt. ReMAS is designed to identify relevant mistaken payments and will generate a case that can be assigned to any contractor for recovery--not only the contractor that processed the mistakenly paid claims. However, ReMAS has been under development for over 6 years and is currently only being used for liability and workers' compensation recoveries by a fraction of the contractors. Pilot testing of ReMAS on EGHP cases will not begin until October 2004.
The purpose of the HUBZone program, established by the HUBZone Act of 1997, is to stimulate economic development in economically distressed communities (HUBZones) by providing federal contracting preferences to eligible small businesses. The types of areas in which HUBZones may be located are defined by law and consist of census tracts, nonmetropolitan counties, Indian reservations, redesignated areas (that is, census tracts or nonmetropolitan counties that no longer meet the criteria but remain eligible until after the release of the first results from the 2010 census or 3 years after they ceased being qualified), and base closure areas. To be certified to participate in the HUBZone program, a firm must meet the following four criteria: it must be small by SBA size standards; it must be at least 51 percent owned and controlled by U.S. citizens; its principal office—the location where the greatest number of employees perform their work—must be located in a HUBZone; and at least 35 percent of its full-time (or full-time equivalent) employees must reside in a HUBZone. The Veterans Benefits Act of 2003, which established the service-disabled veteran-owned small business program, permits contracting officers to award set-aside and sole-source contracts to any small business concern owned and controlled by one or more service-disabled veterans. Veteran means a person who served in the active military service and who was discharged or released under conditions other than dishonorable. Service-disabled means that the disability was incurred or aggravated in the line of duty in active service. A firm also must qualify as a small business under the North American Industry Classification System (NAICS) industry-size standards. A firm must meet several initial eligibility requirements to qualify for the 8(a) program (a process known as certification), and then meet other requirements to continue participation.
A concern meets the basic requirements for admission to the program if it is a small business that is unconditionally owned and controlled by one or more socially and economically disadvantaged individuals who are of good character and U.S. citizens, and demonstrates the potential for success. Our work involving 80 economic development programs at four agencies—Commerce, HUD, SBA, and USDA—indicates that the design of each of these fragmented programs appears to overlap with that of at least one other program in terms of the economic development activities that they are authorized to fund. For example, as shown in table 1, the four agencies administer a total of 54 programs that can fund “entrepreneurial efforts,” which include helping businesses to develop business plans and identify funding sources. SBA accounts for 19 of these 54 programs, and it administers programs contained in six of the nine economic activities. (The 19 SBA programs are listed in the table in appendix I.) Our prior work going back more than 10 years also identified potential overlap and fragmentation in economic development programs. Among other things, we found that legislative or regulatory restrictions that target funding on the basis of characteristics such as geography, income levels, and population density (rural or urban) differentiated many programs. While some of the 80 programs we assessed fund several of the nine economic development activities, almost 60 percent (46 of 80) fund only one or two activities. These smaller, narrowly scoped programs appear to be the most likely to overlap because many can only fund the same, limited types of activities. For example, narrowly scoped programs comprise 21 of 54 programs that can fund entrepreneurial efforts. Moreover, most of the 21 programs target similar geographic areas. 
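Tallying which programs can fund which activities, as in table 1, is essentially a grouping exercise. The sketch below uses a small hypothetical program-to-activity mapping, not the actual data behind the 80-program review:

```python
from collections import defaultdict

# Illustrative tally of program overlap by economic development activity.
# Program names and mappings are hypothetical stand-ins.
programs = {
    "SBA program A": {"entrepreneurial efforts", "commercial loans"},
    "USDA program B": {"commercial loans"},
    "HUD program C": {"infrastructure", "entrepreneurial efforts"},
    "Commerce program D": {"infrastructure"},
}

by_activity = defaultdict(list)
for name, activities in programs.items():
    for activity in activities:
        by_activity[activity].append(name)

# Activities fundable by more than one program indicate potential overlap.
overlapping = {a: sorted(p) for a, p in by_activity.items() if len(p) > 1}

# Narrowly scoped programs fund only one or two activities.
narrow = [n for n, acts in programs.items() if len(acts) <= 2]
```

In this toy mapping every activity is fundable by at least two programs, which mirrors the report's finding that each program's design overlaps with that of at least one other.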
To address issues arising from potential overlap and fragmentation in economic development programs, we previously have identified collaborative practices agencies should consider using to maximize the performance and results of federal programs that share common outcomes. These practices include leveraging physical and administrative resources, establishing compatible policies and procedures, monitoring collaboration, and reinforcing agency accountability for collaborative efforts through strategic or annual performance plans. Preliminary findings from our ongoing work show that Commerce, HUD, SBA, and USDA appear to have taken actions to implement some of the collaborative practices, such as defining and articulating common outcomes, for some of their related programs. However, the four agencies have offered little evidence so far that they have taken steps to develop compatible policies or procedures with other federal agencies or searched for opportunities to leverage physical and administrative resources with their federal partners. Moreover, we found that most of the collaborative efforts performed by program staff on the front line that we have been able to assess to date have occurred only on a case-by-case basis. As a result, the agencies do not appear to be consistently monitoring or evaluating these collaborative efforts in a way that allows them to identify areas for improvement. We reported in September 2008 that the main causes for limited agency collaboration include few incentives to collaborate and lack of a guide on which agencies could rely for consistent and effective collaboration. In that same report, we recommended that SBA and USDA take steps to adopt a formal approach to encourage further collaboration. To date, the two agencies have entered into a memorandum of understanding and USDA has recently taken some action to monitor the collaborative efforts of its field office staff. 
In failing to find ways to collaborate more, agencies may miss opportunities to leverage each other’s unique strengths to more effectively promote economic development and efficiently use taxpayer dollars set aside for that purpose. In addition, a lack of information on program outcomes has been a long-standing concern. This information is needed to determine if potential overlap and fragmentation has resulted in ineffective or inefficient programs. More specifically: Commerce’s Economic Development Administration (EDA), which administers eight of the programs we reviewed, continues to rely on a potentially incomplete set of variables and self-reported data to assess the effectiveness of its grants. This incomplete set of variables may lead to inaccurate claims about program results, such as the number of jobs created. Moreover, in only limited instances have EDA staff requested documentation or conducted site visits to validate the self-reported data provided by grantees. We first reported on this issue in March 1999 and issued a subsequent report in October 2005. In response to a recommendation we made in 2005, EDA issued revised operational guidance in December 2006 that included a new methodology that regional offices were to use to calculate estimated jobs and private-sector investment attributable to EDA projects. However, during our recently completed review we found that the agency still primarily relies on grantee self-reported data and conducts a limited number of site visits to assess the accuracy of the data. While acknowledging these findings, EDA officials stated that they do employ other verification and validation methods in lieu of site visits. These methods include reviews to ensure the data are consistent with regional trends and statistical tests to identify outliers and anomalies.
SBA has not yet developed outcome measures that directly link to the mission of its HUBZone program, or implemented its plans to evaluate the program based on variables tied to program goals. We reported in June 2008 that while SBA tracks a few performance measures, such as the number of small businesses approved to participate in the program, the measures do not directly link to the program’s mission. Therefore, we recommended that the agency further develop measures and implement plans to assess the effectiveness of the program. While SBA continues to agree that evaluating the outcomes of the HUBZone program is important, to date the agency has not yet committed resources for such an evaluation. The USDA’s Office of Rural Development, which administers 31 of the programs we reviewed, has yet to implement the USDA Inspector General’s (IG) 2003 recommendation on ensuring that data exist to measure the accomplishments of one of its largest rural business programs—the Business and Industry loan program, which cost approximately $53 million to administer in fiscal year 2010. USDA officials stated that they have recently taken steps to address the IG’s recommendation, including requiring staff to record actual jobs created rather than estimated jobs created. However, an IG official stated that these actions are too recent to determine whether they will fully address the recommendation. Without quality data on program outcomes, these agencies lack key information that could help them better manage their programs. In addition, such information would enable congressional decision makers and others to make decisions to better realign resources, if necessary, and identify opportunities for consolidating or eliminating some programs. Building on our past work, we are in the planning phase of a new, more in- depth review that will focus on a subset of these 80 programs, including a number of SBA programs. 
We plan to evaluate how funds are used, identify additional opportunities for collaboration, determine and apply criteria for program consolidation, and assess how program performance is measured. More generally, as the nation rises to meet the current fiscal challenges, we will continue to assist Congress and federal agencies in identifying actions needed to reduce duplication, overlap, and fragmentation; achieve cost savings; and enhance revenues. As part of current planning for our future annual reports, we are continuing to look at additional federal programs and activities to identify further instances of duplication, overlap, and fragmentation as well as other opportunities to reduce the cost of government operations and increase revenues to the government. We will be using an approach designed to ensure governmentwide coverage by the time we issue our third report in fiscal year 2013. We plan to expand our work to more comprehensively examine areas where a mix of federal approaches is used, such as tax expenditures, direct spending, and federal loan programs. Likewise, we will continue to monitor developments in the areas we have already identified. Issues of duplication, overlap, and fragmentation will also be addressed in our routine audit work during the year as appropriate and summarized in our annual reports. As GAO has reported, three small business programs have had varying degrees of internal control weaknesses that affected program oversight. For example, in a June 2008 report, GAO determined that SBA’s mechanisms for certifying and monitoring firms in the HUBZone program gave limited assurance that only eligible firms participated. In our June 2008 report on the HUBZone program, we found that (1) SBA’s mechanisms for certifying and monitoring firms provided limited assurance that only eligible firms participated in the program and (2) the agency had not evaluated the effectiveness of the program.
Specifically, for certification and recertification, firms self-reported information on their applications and SBA requested documentation or conducted site visits of firms to validate the self-reported data in limited instances. Our analysis of the 125 applications submitted in September 2007 showed that SBA requested supporting documentation for 36 percent of the applications and conducted one site visit. To address these deficiencies, we recommended that SBA develop and implement guidance to more consistently obtain supporting documentation upon application and conduct more frequent site visits to help ensure that firms applying for certification were eligible. SBA has made some progress in better ensuring that participating firms are eligible for the HUBZone program. According to agency officials, SBA conducted 911 site visits to certified firms in fiscal year 2009 and made 1,142 site visits in fiscal year 2010. In March 2010, SBA issued a guide for analysts to use when reviewing applications to help ensure a standardized and more efficient review of applications. The guidance provides examples of the types of documentation that SBA staff should collect from applicants and also offers tips for identifying fraudulent claims and documents. We also reported that SBA had not followed its policy of recertifying firms (the process through which SBA can monitor firms’ continued eligibility) every 3 years and as a result had a backlog of more than 4,600 firms that had gone unmonitored for more than 3 years. We recommended that the agency eliminate the backlog and take the necessary steps to better ensure recertifications were completed in a more timely fashion. In September 2008, SBA eliminated the backlog by hiring more staff. The agency recently provided us with a flow chart that describes the most recent steps they had taken to recertify firms in a timely manner and the resources that they planned to dedicate to this effort. 
Finally, as discussed previously, we found that SBA had not implemented plans to assess the effectiveness of the HUBZone program and recommended that SBA develop performance measures and implement plans to do so. In August 2008, SBA issued a notice of methodology in the Federal Register for measuring the impact of the HUBZone program. However, the proposed methodology was not well developed. For example, it did not incorporate expert input or a previous study conducted by SBA’s Office of Advocacy. We do not believe that this effort was useful for addressing our recommendation. While SBA continues to agree that evaluating program outcomes is important, to date the agency has not yet committed resources for such an evaluation. In May 2010, we reported that VA had made limited progress in implementing an effective verification program. The 2006 Act requires that VA give priority to veteran-owned and service-disabled veteran-owned small businesses when awarding contracts to small businesses and provides for the use of sole-source and set-aside contracts to achieve contracting goals VA must establish under the Act. The Act also requires VA to maintain a database of veteran-owned and service-disabled veteran-owned small businesses and verify the ownership, control, and veteran or service-disabled status of businesses in the database. The database would be available to other federal agencies. Furthermore, businesses conducting contract work for VA must be listed in the database to receive contracting preferences for veteran-owned and service-disabled veteran-owned small businesses. This verification requirement is unique to VA. For other federal agencies, the service-disabled veteran-owned small business program is a self-certification program and therefore is susceptible to misrepresentation (that is, ineligible firms participating in the program).
While the 2006 Act requires VA to use the veteran preferences authorities to award contracts only to verified businesses, VA’s regulation did not require that this take place until January 1, 2012. Since our May 2010 report, Congress passed the Veterans Small Business Verification Act requiring VA to accelerate its time frame for verifying all businesses in its mandated database. VA has set a target date of July 31, 2011, to do so. In fiscal year 2009, 25 percent of the contracts awarded using veteran preference authorities went to verified businesses. At the time of our report, VA had verified about 2,900 businesses––approximately 14 percent of businesses in its database of veteran-owned and service-disabled veteran-owned small businesses. Among the weaknesses we identified in VA’s verification program were files missing required information and explanations of how staff determined that control and ownership requirements had been met. VA’s procedures call for site visits to further investigate the ownership and control of higher-risk businesses, but the agency had a large and growing backlog of businesses awaiting site visits. Furthermore, VA contracting officers awarded contracts to businesses that were denied after the verification process. Finally, although site visit reports indicate a high rate of misrepresentation, VA had not developed guidance for referring cases of misrepresentation for investigation and enforcement action. Such businesses would be subject to debarment under the 2006 Act. To help address the requirement to maintain a database of verified businesses, we recommended that VA develop and implement a plan for a more thorough and effective verification program. 
More specifically, we recommended that the plan address actions and milestone dates for improving the program, including updating data systems to reduce manual data entry and adding guidance on how to maintain appropriate documentation, on when to request documentation from business owners or third parties, and on how to conduct an assessment that addresses each eligibility requirement. We also recommended that VA conduct timely site visits at businesses identified as higher risk and take actions based on site visit findings, including prompt cancellation of verified status. According to VA officials, they have taken a number of actions to address our recommendations. For example, VA officials told us they had awarded contracts to help expedite the processing of applications, including conducting site visits and reviewing documentation supplied by applicants. As of March 29, 2011, they said that 607 site visits had been conducted and 195 of the applicants visited (32 percent) did not meet the control requirement. Also, VA officials reported a queue of 6,431 active applications pending verification and said they had acquired the capability to process 500 applications per week and expected to have processed about 15,000 applications by July 31, 2011. Furthermore, VA officials told us that as part of their implementation of the requirements of the Veterans Small Business Verification Act, all applicants are now required to submit specified documents establishing their eligibility with respect to ownership and control before a verification decision can be made. VA officials told us they were in the process of testing a new case management system that will reduce the manual input of data, which they plan to implement by June 1, 2011. VA’s development of an effective verification program could provide an important tool for SBA’s oversight of the governmentwide contracting program for service-disabled veteran-owned small businesses.
That is, VA’s database could serve as a resource for federal agencies to use when assessing whether a firm is actually service-disabled veteran-owned. We reported in March 2010 that while SBA relies primarily on its annual reviews of 8(a) firms to help ensure the continued eligibility of firms in the program, we observed inconsistencies and weaknesses in annual review procedures related to determinations of continued eligibility. For example, SBA did not consistently notify or graduate 8(a) firms that exceeded industry averages for economic success or graduate firms that exceeded the net worth threshold of $750,000 (see table 2). We noted that the lack of specific criteria in the current regulations and procedures may have contributed to the inconsistencies that we observed and that SBA had taken steps to clarify some, but not all, of these requirements in a proposed rule change. We also reported that SBA’s program offices did not maintain comprehensive data on or have a system in place to track complaints about the eligibility of firms in the 8(a) program. District staff were not aware of the types and frequency of complaints across the agency. As a result, SBA staff lacked information that could be used to help identify issues relating to program integrity and help improve the effectiveness of SBA oversight. Although complaint data are not a primary mechanism to ensure program eligibility, continuous monitoring is a key component in detecting and deterring fraud. We recommended that SBA provide more guidance to help ensure that staff more consistently follow annual review procedures and more fully utilize third-party complaints to identify potentially ineligible firms. According to SBA officials, they have taken some actions to address these recommendations. For example, SBA officials told us that in August 2010 they had provided staff with a new guide for conducting annual reviews of the continuing eligibility of firms in the 8(a) program. 
Additionally, SBA officials said they were providing training to staff on the recently published revisions to regulations governing the 8(a) program. These revisions provided more clarification on factors that determine economic disadvantage (such as total assets, gross income, retirement accounts) for continuing eligibility in the program. SBA officials also said that they have been incorporating changes into their Web site that will allow third parties to submit complaints about potentially ineligible firms in the 8(a) program. Chair Landrieu, Ranking Member Snowe, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information on this testimony, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Paige Smith, Assistant Director; Tania Calhoun; Andy Finkel; Janet Fong; Triana McNeil; Harry Medina; Barbara Roesmann; Kathryn Supinski; and Bill Woods. Table 3 lists the 80 economic development programs and provides information about their funding, when available. Using the Catalog of Federal Domestic Assistance and other agency documents, we identified 80 federal programs administered by the four agencies listed below—the Departments of Commerce (Commerce), Housing and Urban Development (HUD), and Agriculture (USDA) and the Small Business Administration (SBA)—that could fund economic development activities. We did not include tax credit programs aimed at economic development in this review.
Economic development programs--administered efficiently and effectively--can contribute to the well-being of the economy at the least cost to taxpayers. Such programs can encompass small business development and contracting. To encourage such contracting, Congress created programs--such as the Historically Underutilized Business Zone (HUBZone), service-disabled veteran-owned small business, and 8(a) Business Development programs--that give contracting preferences to certain types of small businesses: those in economically distressed communities, those owned by service-disabled veterans, and those with eligible socially and economically disadvantaged owners. This testimony addresses (1) potential duplication in economic development programs and (2) internal control weaknesses in three small business programs. This testimony is based on related GAO work from 2008 to the present and updates it as noted. GAO examined programs at the Departments of Commerce, Housing and Urban Development, and Agriculture and the Small Business Administration (SBA) to assess program overlap, collaboration, and measures of effectiveness (GAO-11-477R). GAO also reviewed data from SBA and the Department of Veterans Affairs (VA) and conducted site visits. The reports identified opportunities to increase program efficiencies and made recommendations to improve internal controls and develop outcome-oriented measures. Results of GAO's work on 80 economic development programs at the four agencies indicate that the design of each appears to overlap with that of at least one other in terms of the economic development activities they can fund. For example, the agencies administer 54 programs that fund "entrepreneurial efforts," which include business development. SBA has 19 such economic development programs. 
To address issues arising from potential overlap and fragmentation, GAO relied on previously identified collaborative practices agencies should consider using to maximize performance and results. GAO found that agencies' collaborative efforts were not comprehensive but conducted on a case-by-case basis. Further, the agencies generally have not measured outcomes. For instance, SBA has not yet developed outcome measures that directly link to the mission of its HUBZone program. In 2005 and 2008, GAO made recommendations to Commerce and SBA, respectively, aimed at improving the data and methods they rely on to measure the outcomes of some of their economic development programs. Generating key information on outcomes (that measure effectiveness) could help agencies better manage programs. Such information also would enable decision makers to better identify opportunities to realign resources, and if necessary, consolidate or eliminate some programs. As GAO has reported, three small business programs have had varying degrees of internal control weaknesses that affected program oversight. First, in a June 2008 report, GAO determined that SBA's mechanisms for certifying and monitoring firms in the HUBZone program gave limited assurance that only eligible firms participated. For certification and recertification (of initial and continued eligibility), SBA requested documentation or conducted site visits to validate self-reported data in limited instances. In response to GAO's recommendations, SBA has issued guidance requiring supporting documentation upon application and conducted site visits to certified firms. Second, in a May 2010 report, GAO reported that VA has faced challenges in effectively responding to a 2006 statutory mandate to verify the eligibility of small businesses owned by service-disabled or other veterans. 
Although such businesses self-certify their contracting eligibility, VA (unique among federal agencies) must maintain a database of these firms, verify their status, and only give contracting preferences to verified firms. GAO reported that VA had verified only about 14 percent of firms in its database. Since GAO recommended that VA develop a plan for a more effective verification program, VA stated that it has taken steps to improve its verification process, including awarding contracts to expedite the processing of applications. Finally, in a March 2010 report, GAO found that while SBA conducts annual reviews of 8(a) firms to help ensure continued eligibility, key controls needed to be strengthened. GAO's review of a sample of 8(a) firms identified an estimated 55 percent in which SBA staff failed to complete required procedures to assess eligibility criteria. In response to GAO's recommendation that SBA provide more guidance to staff on annual review procedures, SBA stated that it issued a new guide in August 2010.
Crash parts are generally made of sheet metal or plastic and installed on the exterior of a motor vehicle. These parts include hoods, doors, fenders, and trunk lids. Crash parts exclude mechanical parts such as batteries, filters, shock absorbers, and spark plugs. Body shops often use a mix of parts in collision repairs, but we use the term “crash parts” in this report to refer to parts used on the exterior of a vehicle. Aftermarket crash parts are the replacement automotive crash parts that are not made by the original equipment manufacturer (OEM). Many of these aftermarket crash parts manufacturers are located overseas. Recycled airbags are salvaged nondeployed airbags removed from damaged or old vehicles. Crash parts are big business. In 1999, drivers had an estimated 6 million automobile crashes in the United States costing over 40,000 lives and about $8 billion in damage—of which $1.2 billion represents the costs of aftermarket crash parts. Overall, about 60 cents out of every dollar of automobile insurance claims is spent on repairing collision damage to vehicles. Insurance companies estimate that using aftermarket instead of OEM parts saves hundreds of millions of dollars each year. Until the mid-1980s, consumers and auto body shops could purchase new replacement crash parts only from the original automobile manufacturer. At that time, independent parts manufacturers began offering aftermarket replacement parts at substantially lower prices. Still, the crash parts industry remains highly concentrated, and OEM parts account for about 80 percent of the market. Figure 1 shows the replacement crash parts market by source. Some aftermarket crash parts are certified as to their quality. In 1987, the insurance industry funded the nonprofit Certified Automotive Parts Association (CAPA), whose objective is to ensure the quality of aftermarket crash parts. 
To determine the quality of these parts, the association examines a manufacturer’s plant, equipment, manufacturing processes, and resulting products. If the association finds the aftermarket crash parts to be equivalent in appearance, fit, material composition, and mechanical properties to new OEM parts, it certifies the parts as functionally equivalent to OEM parts. In addition, it periodically purchases parts in the open market and checks them to ensure they meet the association’s standards. According to the association, in 1999, about 35 percent of all aftermarket crash parts were certified. This represents about 5 percent of the total crash parts market—which includes OEM, aftermarket, and recycled parts combined. More recently, in 2000, Global Validators, an automotive quality consultant, started a new certification process directed at improving the quality of aftermarket crash parts. The Manufacturers’ Qualification and Validation Program, similar to the CAPA program, is a set of guidelines that outline policies and quality management practices designed to ensure that aftermarket crash parts are equal in form, fit, function, performance, durability, and appearance to OEM parts. This program is based on the QS-9000 standard, a production quality standard developed in the automotive industry. Consumers can search an on-line database to determine if a specific part has been reviewed under the program. At the federal level, NHTSA is responsible for reducing accidents, deaths, and injuries resulting from motor vehicle crashes. NHTSA accomplishes this, in part, by setting and enforcing safety performance standards that apply to new motor vehicles and motor vehicle equipment. Under these standards, manufacturers of motor vehicles and equipment must assure that their products comply with all applicable safety standards and certify such compliance. 
The federal standards are written in terms of minimum safety performance requirements for motor vehicles and equipment. Examples of standards include hydraulic brake system requirements to ensure safe braking performance, vehicle lamp requirements to provide adequate illumination, and hood latch requirements to ensure that hoods remain fastened securely. The Motor Vehicle Safety Act requires manufacturers to inform NHTSA when a vehicle or equipment is defective or when a vehicle or equipment does not comply with an applicable motor vehicle safety standard. These requirements also apply to persons who import motor vehicles and equipment into the United States. NHTSA does not approve vehicles or equipment. Instead, federal law establishes a “self-certification” process under which each manufacturer is responsible for certifying that its products meet all applicable safety standards. The law also gives NHTSA the authority to investigate possible safety-related defects, to decide whether a defect exists, and to order a manufacturer to notify consumers and to remedy any defect. NHTSA’s process for identifying a possible defect in motor vehicles and motor vehicle equipment begins with screening the complaints it receives in its Office of Defects Investigation (ODI). Sources of complaints include a toll-free hotline, a Web page, e-mail, telephone calls, and letters. In an average year, ODI receives between 40,000 and 50,000 complaints. These complaints are entered into a complaint database, which ODI analyzes to identify potential defect trends. When the screening identifies a potential problem, ODI opens an investigation called a preliminary evaluation. This evaluation involves notifying the manufacturer and the public and gathering information on the potential defect. If this process continues to indicate that a defect trend may exist, the investigation moves to a second stage called an engineering analysis. 
In this stage, ODI analyzes the character and scope of the potential defect in more detail. This analysis may include inspections, surveys, tests, and efforts to obtain additional information from the manufacturer. If ODI continues to believe that a defect trend may exist, a panel of experts from the agency may be convened to review the data. If the expert panel concurs with ODI, a recall request letter is sent to the manufacturer. If the manufacturer declines to conduct a recall in response to the letter, NHTSA’s Associate Administrator for Safety Assurance may issue an initial decision that a defect exists and convene a public meeting on the issue. After the meeting, the NHTSA Administrator may issue a final decision and order the manufacturer to conduct a recall. If necessary, the agency will then go to court to enforce such an order. In almost all cases, the manufacturer agrees to conduct the recall without NHTSA’s forcing it to do so. According to NHTSA officials, the agency opens between 80 and 100 defect investigations each year, of which more than half result in recalls. In addition, manufacturers conduct an average of 200 defect recalls each year that are not influenced by NHTSA’s investigations. In 2000, there were over 385 recalls for safety-related defects affecting over 18 million vehicles. States are also involved in the regulation of aftermarket crash parts and recycled airbags. According to the National Association of Independent Insurers, 40 states have enacted some form of legislation governing the use of aftermarket crash parts in vehicle repairs. Most of this legislation is directed at ensuring that vehicle owners are aware that aftermarket parts are being used in repairs. For example, 33 states require that written repair estimates contain a disclosure statement notifying consumers that aftermarket crash parts will be used in the repair, and 8 states require the consent of the consumer to use aftermarket crash parts. 
Furthermore, according to the Automotive Occupants Restraints Council, New York was the only state that had enacted a law regulating the sale and installation of recycled airbags as of December 2000. Appendix II provides a summary of state law provisions covering aftermarket crash parts and recycled airbags. In addition, in early 2000, the Massachusetts Auto Damage Appraiser Licensing Board conducted two hearings to discuss the safety of OEM, aftermarket, and recycled parts used in collision repair. In September 2000, the Board voted three to two that aftermarket cosmetic parts are not exact duplicates of OEM parts and may jeopardize the safety and value of a vehicle. The debate on the quality and safety of aftermarket crash parts is highly polarized, reflecting a range of opinions on the safety of aftermarket crash parts: Aftermarket crash parts are unsafe. According to this position—held generally by many collision-repair associations and repair shop owners—aftermarket crash parts are inferior to OEM parts in fit and finish and are dangerous. The evidence for this argument is mostly anecdotal, although we saw aftermarket crash parts that were clearly different from their OEM counterparts. Aftermarket crash parts may be unsafe. According to this position—held generally by new vehicle manufacturers—the impact of aftermarket crash parts on occupants’ safety is unknown. Therefore, the manufacturers recommend that only OEM parts be used to ensure that repaired vehicles perform to their original safety specifications. Aftermarket crash parts are safe. According to this position—held generally by insurance companies and aftermarket manufacturers— aftermarket crash parts are cosmetic only and do not affect vehicle safety. The debate on the use of recycled airbags is also divided. General opinions include the following: Recycled airbags may be unsafe. 
Advocates of this position—generally OEMs, some insurance companies, and body shop owners—maintain that deployed airbags should be replaced only with new OEM airbags. Advocates of this position maintain that airbags are a vital safety feature and the potential risks of recycled airbags should preclude replacing a deployed airbag with anything other than a new airbag. Furthermore, they argue that recycled airbags do not undergo the same intensive quality checks as newly manufactured units. They add that many undetectable variables, like water damage to the airbag, could prevent a recycled airbag from deploying properly. Finally, they contend that the existence of a recycled airbag market will further increase airbag theft. Recycled airbags are safe. Advocates of this position—generally recycling organizations and some insurance companies—maintain that reusing nondeployed OEM airbags is a viable, economical, and safe alternative to using new, more costly OEM airbags when the recycled airbags are properly matched, handled, and installed. The advocates add that lower-income drivers may not be able to afford to replace their airbags with new, more expensive OEM airbags. Therefore, recyclers are creating a market in which drivers can purchase replacement airbags that are 50 percent to 70 percent cheaper than new airbags. We identified seven studies of aftermarket crash parts or recycled airbags, but their results do not conclusively resolve the issue of safety. Five studies—one by consumer advocates, one by an auto manufacturer, and three by the insurance industry—examined the use of aftermarket crash parts. Two studies—one by the recycling industry and the other by an insurance company—focused on the safety of recycled airbags. Although these studies are useful, they do not resolve the debate over the safety of aftermarket crash parts and recycled airbags because they reach different conclusions and are limited in number and scope. 
In February 1999, Consumer Reports published the results of its study and fueled the debate on the quality of aftermarket crash parts. Consumer Reports compared OEM and aftermarket bumpers and CAPA-certified fenders for a 1993 Honda Accord and a 1993 Ford Taurus. It tested fender corrosion resistance, bumper protection, and the overall quality of the parts’ fit. Consumer Reports found that CAPA-certified aftermarket fenders rusted more quickly and did not always fit properly. The report also stated that aftermarket bumpers did not fit properly and did not provide sufficient protection in low-speed collisions. The aftermarket bumpers tested, which were not CAPA-certified, shattered in a variety of tests at 5 miles per hour or less. One aftermarket bumper did not prevent damage to the Ford headlight mounting panel, radiator support, and air conditioner condenser. Another bumper allowed damage to the Honda radiator, air conditioner condenser, radiator support, and other parts. The report concluded that (1) aftermarket crash parts are inferior to OEM parts, (2) consumers are ill served by the use of aftermarket crash parts, and (3) aftermarket crash parts may influence vehicle safety. Consumer Reports’ study also noted that comprehensively determining the safety of aftermarket crash parts through testing is very difficult, if not impossible. According to Consumer Reports, crash testing—which would ultimately resolve questions about the safety of these parts—is very complex and expensive to conduct for all combinations of replacement crash parts and original vehicles. In 1994, Ford compared its replacement crash parts to certified and noncertified aftermarket crash parts. Ford tested the parts for fit, finish, structural integrity, corrosion resistance, material composition, and dent resistance. According to the study, Ford replacement parts outperformed the aftermarket replacement parts for all quality factors. 
On the basis of this testing, Ford concluded that aftermarket crash parts are inferior to Ford replacement parts and are not of “like kind and quality.” The Ford testing, like the Consumer Reports testing, focused on the quality, not the safety, of aftermarket crash parts. The Insurance Institute for Highway Safety (IIHS) conducted two studies of aftermarket crash parts. IIHS sought to determine whether aftermarket crash parts pose a safety risk. In its 1987 study, IIHS crashed a 1987 Ford Escort without its front fenders, door skins, and grill and with an aftermarket hood installed. The Escort complied with all front-into-barrier crash test performance requirements specified in federal standards. IIHS concluded that aftermarket crash parts do not affect occupants’ safety during a collision. In February 2000, IIHS released the results of a similar test with a 1997 Toyota Camry and reached the same conclusion. In that test, IIHS compared the results of a crash test of two vehicles—(1) a 1997 Toyota Camry with the front fenders, door skins, and front bumper removed and a CAPA-certified aftermarket hood installed and (2) a factory original 1997 Camry. The study found no significant difference in the performance of the two vehicles, leading IIHS to conclude that crash parts are irrelevant to safety with the possible exception of hoods. IIHS noted two possible safety-related concerns with hoods: (1) a hood latch could fail while driving, allowing the hood to fly up suddenly, obscuring the driver’s view, and (2) a hood may not buckle properly during a crash, allowing it to be driven back near or into the windshield in a collision. In 1995, Thatcham—an insurance industry research facility located in England—conducted a test similar to the 1987 IIHS study. Thatcham crash-tested a 1995 Vauxhall Astra with the front fenders, door skins, and front bumper removed and an aftermarket hood installed. 
It found that the Astra complied with all front-into-barrier crash test performance requirements specified in federal standards—consistent with IIHS’ findings. The Thatcham study concluded that aftermarket crash parts do not affect the crashworthiness of a vehicle. The Automotive Recyclers Association (ARA) funded a study in 1998 at Garwood Laboratories in California to test 196 recycled airbags and 5 new OEM airbags. The study showed that 195 out of 196 recycled airbags deployed within the manufacturer’s specifications. An association official stated that the laboratory pre-identified one flood-damaged airbag and was not surprised when the airbag did not deploy within the manufacturer’s specifications. Thus, the association concluded that recycled airbags are a viable, economical, and safe alternative to new, more costly OEM airbags when properly handled, shipped, and professionally installed. In 2000, the Insurance Corporation of British Columbia (ICBC) tested 136 recycled airbags from various automobiles. This study sought to determine if there was any appreciable difference in deployment between factory-new OEM airbags and recycled airbags. An official with ICBC stated that the study showed that there is no appreciable difference between OEM and recycled airbags when the airbags are properly replaced and have not been exposed to flood damage. ICBC expects to begin specifying that repairers use recycled airbags in early 2001. An official from ICBC stated that it expects to use only certified recycled airbags in replacing deployed units. We identified two U.S. companies that are developing testing procedures to certify the safety and reliability of recycled airbags. Both organizations use electrical engineering and other methods to detect flood damage, foreign matter, and electronic problems. One of the companies said that it had tested 58 recycled airbags and found that the recycled airbags it tested deployed within the manufacturer’s specifications. 
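A quick arithmetic check puts the ARA-funded test results in perspective; the figures come from the study as cited above (variable names are ours, and this is illustrative only):

```python
# ARA-funded Garwood Laboratories test figures, as cited in the report
tested = 196
deployed_within_spec = 195  # the single failure was a pre-identified flood-damaged airbag

success_rate = deployed_within_spec / tested * 100
print(round(success_rate, 1))  # 99.5
```

A 99.5 percent deployment rate across the sample, with the one failure traced to known flood damage, is the basis for the association's conclusion that properly handled recycled airbags performed within specifications.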
These companies said that their approaches could ensure that a recycled airbag performs within the manufacturer’s specifications. Both organizations stated that the key to the safety of recycled airbags is the proper matching, handling and installation of the recycled airbags. One company has begun certifying recycled airbags, and the other plans to start certifying airbags in early 2001. While the studies and tests conducted on aftermarket crash parts and recycled airbags provide useful information, they do not appear sufficient to resolve the question of whether aftermarket crash parts and recycled airbags are safe. The limited number and scope of the studies make it difficult to draw conclusions about all parts. In the studies of aftermarket crash parts, only three vehicles were crash-tested—a 1987 Ford Escort, a 1997 Toyota Camry, and a 1995 Vauxhall Astra. These vehicle models represent only a small percentage of the hundreds of makes, models, and years of vehicles on the roads today. The primary focus of the Consumer Reports study was on the quality of aftermarket crash parts, although it raised questions about their safety. The study also stated that the large number of vehicles and parts available may make it impossible to answer the safety question through testing. Although the two recycled airbag studies conducted by ARA and ICBC showed that undamaged and properly installed airbags will deploy within the manufacturer’s specifications, they did not develop measures to ensure that recycled airbags are undamaged. They highlighted the need to develop testing procedures to ensure that recycled airbags are undamaged and not taken from flood-damaged vehicles. The Motor Vehicle Safety Act gives the Secretary of Transportation broad authority to prescribe safety standards to reduce traffic accidents, deaths, and injuries on the nation’s roads. The act authorizes the Secretary to prescribe safety standards for new motor vehicles and motor vehicle equipment. 
The Motor Vehicle Safety Act prohibits, in part, the manufacturing, selling, and importing of new vehicles and new vehicle equipment that do not comply with NHTSA’s safety standards. These provisions could apply to both new OEM and new aftermarket crash parts since new parts are classified as new motor vehicle equipment. Although NHTSA has the authority to regulate aftermarket crash parts, the agency has not determined that these parts pose a significant safety concern and therefore has not developed safety standards for them. According to agency officials, the agency has not developed safety standards for aftermarket crash parts because testing by IIHS concluded that the use of aftermarket crash parts does not affect vehicle safety; problems with aftermarket crash parts tend to focus on the fit and finish of the parts, rather than on safety; the agency has not identified any trends in the complaints it receives about the safety of aftermarket crash parts and recycled airbags; and those who voiced concerns about the use of aftermarket crash parts, including manufacturers of original replacement parts, have not provided conclusive evidence that aftermarket crash parts pose a significant safety concern. The act’s provisions that apply to aftermarket parts do not apply to recycled airbags because they are used rather than new equipment. For used vehicles, the Motor Vehicle Safety Act directs the Secretary to prescribe safety performance standards for used motor vehicles, in order to encourage and strengthen state motor vehicle inspection programs. Under this provision, the agency could elect to develop safety standards for occupant restraint systems, which might incorporate airbags. NHTSA has not developed such standards because it has not identified significant problems with occupant restraint systems that could be addressed by state motor vehicle inspection programs. 
The agency has, however, determined that water damage can undermine the performance of airbag systems. Through its defect investigation process, NHTSA has identified several safety defects in motor vehicles that were related to the failure of the airbags to operate properly after being exposed to flood damage or the intrusion of other liquids. The resulting recalls affected over 725,000 vehicles. Several other manufacturers have recalled vehicles to address similar problems without being influenced by NHTSA’s investigations. According to NHTSA officials, the agency could conduct a study of recycled airbags and, if appropriate, issue consumer warnings or issue a report to the Congress on its findings. NHTSA has the authority to order manufacturers of replacement parts that contain a safety-related defect to recall the defective items. Manufacturers must notify owners, purchasers, and dealers of the defect and remedy the defect (either through repair or replacement) free of charge. However, NHTSA’s ability to detect parts with safety-related defects is limited because the agency’s database of complaints from vehicle owners and others contains only a fraction of the complaints that manufacturers receive. Moreover, even if NHTSA were to identify unsafe aftermarket crash parts, it would likely have difficulty having them recalled. Recent legislation creates opportunities for NHTSA to gather additional information needed for identifying possible defects and improve its management and analysis of vehicle safety data. An essential component of NHTSA’s overall process is the agency’s ability to detect safety-related defects. To decide whether to investigate a possible safety-related defect, including any relating to OEM and aftermarket crash parts, NHTSA relies heavily on its complaint database. However, this database contains only a fraction of the complaints that customers report to manufacturers. 
In addition, aftermarket crash parts may not be identified as such in the database because consumers who complain to NHTSA may not know they have aftermarket crash parts or their complaints may not indicate that such parts are involved. NHTSA’s ODI receives consumer complaints about possible defects in motor vehicles and motor vehicle equipment from a toll-free consumer hotline, an on-line computer Web page, e-mail, telephone calls, surveys, and letters. As of August 2000, the database contained about 400,000 complaints gathered over the last 10 years. In an average year, ODI receives between 40,000 and 50,000 complaints. The number of complaints in the database may represent only a small percentage of all complaints being made about possible defects. For example, in September 2000, the Administrator of NHTSA testified on the investigation and recall of Firestone tires. The Administrator said that by the end of 1999, NHTSA had received 46 reports of incidents involving these tires. NHTSA did not open a defect investigation at that time because of the large number of tires in use and the variety of possible causes of tire failure. However, after press reports in February 2000 highlighted two fatalities and alluded to a number of other crashes and fatalities, NHTSA opened an investigation. After obtaining additional information from the manufacturers involved and the attendant publicity, the Administrator reported that as of August 31, 2000, NHTSA had received over 1,400 complaints. In addition, according to the former Chief of ODI’s Trends and Analysis Division, the complaints NHTSA receives about safety-related defects may represent only 10 percent of all the complaints that manufacturers receive. This estimate was based on the results of past requests for information made to manufacturers after ODI had opened investigations. For example, in February 2000, ODI began an investigation of plastic door garnish moldings on 1998 and 1999 Sebring Coupe vehicles. 
The investigation responded to 21 consumer complaints of partial and complete detachment of the moldings, some of which occurred while the consumer was driving. During the preliminary evaluation phase of the investigation, ODI requested information from DaimlerChrysler Corporation and obtained 276 additional complaints that the manufacturer had received. According to NHTSA officials, the agency has made efforts over the past few years to encourage repair shops and others to report safety-related problems with either OEM or aftermarket crash parts; however, the agency has received relatively few complaints about these parts.

Aftermarket crash parts may not be identified as such in NHTSA's database because consumers who complain to NHTSA may not know they have aftermarket crash parts or their complaints may not indicate that such parts are involved. According to data supplied by the National Association of Independent Insurers, 10 states do not have any form of legislation addressing the use of aftermarket crash parts. In these states, repair shops are not required to tell an owner specifically that an aftermarket part is being used in a vehicle repair or to receive the owner's consent to use the part. Furthermore, there are no requirements for informing the purchaser of a used vehicle that aftermarket crash parts were used in an earlier repair. In these instances, a complainant would be unlikely to identify a defective part as an aftermarket part.

In addition, in submitting a complaint to NHTSA, a complainant is free to describe the problem in any way he or she chooses. The choice of words in a complaint is important because the process NHTSA follows in identifying potential defect trends begins with a search of key words in the database. For example, we asked NHTSA to search for "aftermarket" and found six complaints that contained that term. However, complainants could have used a variety of other words to describe their complaints or might not have thought to mention the term.
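The limitation just described can be made concrete with a minimal sketch of a key-word search over free-text complaints. This is an illustration only, not NHTSA's actual system; the complaint texts and function names are hypothetical:

```python
# Illustrative only (hypothetical complaint texts): a key-word search over
# free-text complaints, showing why the choice of search term determines
# what a database query can find.

COMPLAINTS = [
    "Replacement hood flew open on the highway",
    "Aftermarket fender rusted through within a year",
    "Air bag failed to deploy after repair",
    "Non-OEM bumper cracked in a minor collision",
]

def keyword_search(complaints, keyword):
    """Return the complaints whose text contains the key word (case-insensitive)."""
    return [text for text in complaints if keyword.lower() in text.lower()]

# Only one complaint uses the literal term "aftermarket"; the other three
# describe the same class of parts in different words and would be missed.
matches = keyword_search(COMPLAINTS, "aftermarket")
print(matches)
```

A search for "aftermarket" returns one of the four complaints even though all four involve replacement parts, which is why broad key-word searches may understate the number of relevant complaints.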
Even if NHTSA were to conclude that certain aftermarket crash parts contained a safety-related defect, its ability to recall them would be hampered because the parts do not always indicate the manufacturer and it may be difficult to identify the vehicles on which the parts were used. According to Consumer Reports, many aftermarket crash parts are essentially invisible to NHTSA's complaint and recall system, mainly because the parts have no manufacturer's name stamped on them. During our review, we also saw several aftermarket crash parts that did not carry the manufacturer's identification. However, the extent to which parts are unlabeled is unknown. Taiwan Auto Body Parts Association officials stated that, since 1994, nearly all of the aftermarket crash parts its members manufacture have been stamped with the manufacturer's name and a production lot number. Furthermore, according to a CAPA official, the aftermarket parts certification process requires manufacturers to mark each part with the manufacturer's name and production lot number to facilitate identification and recall if necessary. However, CAPA recognizes that its certified parts represent only a third of all aftermarket crash parts and that some noncertified parts do not indicate the manufacturer.

Even if the manufacturers of aftermarket parts were clearly identified, little information exists on the purchasers of those parts, making the recall process difficult. When automotive manufacturers recall vehicles, they rely on information they obtained when the vehicles were purchased and on registration records maintained by state departments of motor vehicles to identify and locate vehicle owners. With aftermarket crash parts, however, this information is typically not available. Vehicle owners may purchase aftermarket crash parts at automotive retail stores and install the parts themselves, or body shops may install aftermarket parts that they obtained through parts distributors.
In either instance, it is unlikely that the owners of vehicles with unsafe aftermarket crash parts could be specifically identified, because it is unlikely that shops or distributors would maintain the information needed to locate the owners of the unsafe parts. Consequently, it would be necessary to recall unsafe aftermarket crash parts using a broad-based approach similar to a consumer product safety recall. Under this approach, public announcements are made to alert consumers to the product's safety-related defect. NHTSA officials recognize that it would be very difficult to identify and recall aftermarket crash parts using this approach.

The Firestone tire recall, together with the subsequent congressional investigations and legislative initiatives, focused attention on weaknesses in NHTSA's regulatory and enforcement program. Likewise, congressional oversight reports expressed concerns about the effectiveness and efficiency of NHTSA's process of gathering and analyzing data on vehicle defects and initiating investigations and recalls. The Transportation Recall Enhancement, Accountability, and Documentation Act was signed into law in November 2000. In addition to requirements specifically addressing tires, the act sought to increase NHTSA's legal authority, improve its regulatory programs and access to safety information, and increase its funding levels by $9.1 million. For example, the act requires manufacturers to report to NHTSA safety recalls of their products (which would include OEM and aftermarket crash parts) in other countries, increases civil penalties, and establishes criminal penalties for persons who knowingly violate the act. The act also requires NHTSA to conduct a comprehensive review of all standards, criteria, procedures, and methods, including the data management and analysis systems it uses to open a defect or noncompliance investigation.
The validity of concerns about the use of aftermarket crash parts and recycled airbags has been debated for many years. As a result, a number of states have enacted legislation to ensure that vehicle owners are aware that aftermarket crash parts are being used in repairs. Existing studies on the safety of aftermarket crash parts and recycled airbags show mixed results, are limited in number and scope, and fail to resolve the debate. Although NHTSA has the authority to regulate aftermarket crash parts, the agency has not developed safety standards for them because it has not determined that any aftermarket crash parts contain safety-related defects. NHTSA has more limited authority to regulate the use of recycled airbags. NHTSA could elect to develop safety standards for occupant restraint systems under the used vehicle provisions of the Motor Vehicle Safety Act. These standards could apply to systems containing recycled airbags, but the standards would apply to the restraint system as a whole and not to its individual components. NHTSA has not developed such standards because it has not identified significant problems with occupant restraint systems that could be addressed by state motor vehicle inspection programs.

Absent a comprehensive study that resolves the issue of safety, NHTSA is left to rely on its complaint system to identify possible safety-related defects in aftermarket crash parts and recycled airbag systems. However, NHTSA's defect identification and recall system has limitations. The key database used to identify unsafe parts contains only a small fraction of the complaints received by manufacturers. Apparently, many vehicle owners are either unaware of NHTSA's complaint program or choose not to participate in it.
In addition, aftermarket crash parts may not be identified as such in the database because consumers who complain to NHTSA may not know they have aftermarket crash parts or their complaints may not indicate that aftermarket parts are involved. These limitations may hamper NHTSA's ability to detect safety-related trends through broad key-word searches of its complaint database and make it unlikely that NHTSA can identify all unsafe parts. In addition, the ability to recall unsafe aftermarket crash parts is limited because some parts are not stamped with the manufacturer's name and there is no trail leading from the manufacturer to the ultimate user of the part. Therefore, even if an aftermarket part were found to contain a safety-related defect, the product might have to be recalled using a broad-based announcement similar to a consumer product safety recall.

The two studies on the safety of recycled airbags that we identified concluded that they can be a potentially safe, economical alternative to new airbags as long as they are undamaged and properly handled and installed. However, the failure of some flood-damaged airbags to deploy correctly also demonstrates the potential for serious safety consequences. Resolving the safety issues associated with using recycled airbags is important because it appears likely that their use will grow, especially if the Insurance Corporation of British Columbia begins specifying their use in early 2001.

The recently enacted Transportation Recall Enhancement, Accountability, and Documentation Act gives NHTSA an opportunity to improve its systems for detecting and recalling defective products. It provides NHTSA with the authority to require additional data from manufacturers and others that it can consider in determining the need to initiate an investigation.
In addition, the act's provisions requiring a comprehensive review of all standards, criteria, procedures, and methods used to open a defect or noncompliance investigation give NHTSA an opportunity to improve its processes for identifying potentially unsafe parts.

The Secretary of Transportation should direct the Administrator of the National Highway Traffic Safety Administration, as part of the legislatively required review, to consider taking the following actions:

- Identify additional sources of information to include in the agency's complaint database. This might include obtaining additional data from manufacturers and insurance companies.
- Heighten consumers' awareness of NHTSA's complaint reporting system with the goal of increasing consumers' participation.
- Investigate the safety of using recycled airbag systems, particularly those taken from flood-damaged vehicles, and determine if any action is appropriate concerning their use.

We provided copies of a draft of this report to the Department of Transportation for its review and comment. We discussed the report with NHTSA officials, including the Associate Administrator for Safety Assurance, the acting Chief Counsel, and the Director of the Office of Defects Investigation. They emphasized that NHTSA has statutory authority to issue standards only if they would meet the need for motor vehicle safety and to seek recalls only if there is evidence that particular products made by a specific manufacturer contain a safety-related defect. They added that NHTSA has not taken action to regulate aftermarket crash parts because studies conducted to date and other data and analyses do not demonstrate that there are safety-related problems with the parts. They also maintained that NHTSA does not have statutory authority to regulate recycled airbags.
They indicated that their authority over used vehicles is limited to prescribing standards applicable to used motor vehicles for the purpose of encouraging and strengthening state inspections of those vehicles. As a result, NHTSA can issue performance-based standards for used vehicle inspections but cannot differentiate between new and used individual parts or consider the history of those parts. We revised this report to reflect NHTSA's comments on its authority over recycled airbags. NHTSA also provided other technical clarifications and information, which we incorporated in the report as appropriate.

As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Honorable Norman Y. Mineta, Secretary of Transportation, and the Honorable Robert Shelton, Acting Administrator of the National Highway Traffic Safety Administration. We will also make copies available to others on request. If you have any questions about the report, please contact me at (202) 512-2834. Key contributors to this report were Samer Abbas, Bert Japikse, David Lehrer, John Rose, and Glen Trochelman.

To determine whether any studies have been conducted on the safety of aftermarket crash parts and recycled airbags, we conducted a literature search using the Internet, periodicals, trade journals, and Lexis/Nexis. To identify additional studies, we interviewed federal, state, and industry experts. At the federal level, we interviewed officials from the National Highway Traffic Safety Administration's (NHTSA) Office of Defects Investigation, Office of Regulatory Analysis and Evaluation, Office of Vehicle Safety Compliance, and Office of Vehicle Safety Research. At the state level, we interviewed officials from New York and Ohio.
To gain an industry perspective, we interviewed representatives from organizations representing manufacturers and distributors of aftermarket and original equipment manufacturers' parts, collision repair shops and collision repair specialists, consumer advocacy groups, insurance providers, and vehicle safety experts. (A complete listing of the organizations we contacted appears at the end of this appendix.) In addition, we met with representatives of eight collision repair shops located in Illinois and Massachusetts to obtain their views on the safety and quality of aftermarket crash parts and recycled airbags. Illinois was selected because it was the site of the State Farm case and Massachusetts because the Massachusetts Auto Damage Appraisers Licensing Board recently conducted two hearings to discuss the safety of original, aftermarket, and recycled parts used in collision repair. To determine the extent of NHTSA's authority over aftermarket crash parts and recycled airbags, we reviewed applicable legislation, regulations, program guidance, and other documentation on NHTSA's vehicle safety process and procedures. We also interviewed officials in NHTSA's Office of Defects Investigation, Office of Regulatory Analysis and Evaluation, Office of Vehicle Safety Compliance, Office of Vehicle Safety Research, and Office of General Counsel to gain an understanding of NHTSA's rules, regulations, policies, and procedures. To determine NHTSA's ability to identify and remove unsafe aftermarket crash parts and recycled airbags from the nation's roadways, we reviewed NHTSA's policies and procedures for identifying safety-related defects. We reviewed consumer complaints on aftermarket crash parts contained in NHTSA's complaint database and reviewed the data and reports on the complaints. We also gathered information on the actions NHTSA has taken with respect to the safety of aftermarket crash parts. 
To identify potential ways to improve the effectiveness of NHTSA's safety program, we interviewed NHTSA officials, industry associations, and consumer advocacy groups. We did not analyze the accuracy or quality of the over 400,000 complaints contained in NHTSA's database because such an analysis was beyond the scope of our review. We performed our review from June 2000 through January 2001 in accordance with generally accepted government auditing standards.

Aeromotive Automotive Electrical Engineering Field Services
Airbag Testing Technology, Inc.
Alliance of American Insurers
Alliance of Automotive Manufacturers
American Insurance Association
Auto Body Parts Association
Automotive Aftermarket Industry Association
Automotive Occupant Restraints Council
Automotive Engine Rebuilders Association
Automotive Parts Rebuilders Association
Automotive Recyclers Association
Automotive Service Association
California Autobody Association
Center for Auto Safety
Certified Automotive Parts Association
Coalition for Auto Repair Equality
Consumer's Union (Consumer Reports)
DaimlerChrysler Corporation
Detroit Testing Laboratories
Eagle Automotive, Inc.
Entela Laboratories
Ford Motor Company
General Motors Corporation
Insurance Corporation of British Columbia
Insurance Institute for Highway Safety
Keystone Automotive Industries, Inc.
Massachusetts Auto Body Association
Massachusetts Auto Damage Appraisers Licensing Board
Mitsubishi Motors America, Inc.
National Association of Independent Insurers
National Association of Mutual Insurance Companies
Nationwide Insurance companies
New York State Department of Motor Vehicles
Nissan North America, Inc.
North Star Automotive Group
Ohio Board of Motor Vehicle Collision Repair Registration
Specialty Equipment Manufacturers Association
Society of Collision Repair Specialists
Taiwan Auto Body Parts Association
Tech-Cor, Inc.
Toyota Motor Sales, U.S.A., Inc.
USAA Property and Casualty Insurance
Volkswagen of America, Inc.
Forty states have enacted some form of legislation governing the use of aftermarket crash parts in vehicle repairs, according to data supplied by the National Association of Independent Insurers. According to the association's data, of the 40 states with existing legislation, 90 percent (36 states) require that repair estimates identify each aftermarket crash part used in the repair, and about 83 percent (33 states) require that the repair estimate disclose that aftermarket crash parts are being used in the repair. A manufacturer's warranty is required by 68 percent (27 states), and about 58 percent (23 states) require a manufacturer's identification on any aftermarket crash parts used. The provisions that the states have enacted vary but can be grouped in nine categories. Figure 1 summarizes the states' aftermarket crash parts legislative provisions.

According to an Automotive Occupant Restraints Council official, only New York has laws governing the sale and installation of recycled airbags. New York requires that each recycled airbag be certified according to standards established by an approved, nationally recognized testing, engineering, and research body. On May 2, 2000, the New York Supreme Court for Albany County granted a preliminary injunction concerning the requirement that all recycled airbags be certified before installation. The judge determined that, since there was no existing way to certify recycled airbags, it was impossible to abide by the law. The New York State Department of Motor Vehicles has since begun reviewing one company's recycled airbag certification procedures to determine whether the procedures address the concerns of the court.
The Federal Reserve System is involved in many facets of wholesale and retail payment systems in the United States, including providing wire transfers of funds and securities; providing for the net settlement of check clearing arrangements, automated clearinghouse (ACH) networks, and other types of payment systems; clearing checks and ACH payments; and regulating certain financial institutions and overseeing certain payment systems. Responding in part to a breakdown of the check-collection system in the early 1900s, Congress established the Federal Reserve System as an active participant in the payment system in 1913. The Federal Reserve Act directs the Federal Reserve System to provide currency in the quantities demanded by the public and authorizes the Federal Reserve System to establish a nationwide check clearing system, which has resulted in the Federal Reserve System’s becoming a major provider of check clearing services. Congress modified the Federal Reserve System’s role in the payment system through the Monetary Control Act of 1980 (MCA). One purpose of the MCA is to promote an efficient nationwide payment system by encouraging competition between the Federal Reserve System and private- sector providers of payment services. The MCA requires the Federal Reserve System to charge fees for its payment services, which are to be set to recover, over the long run, all direct and indirect costs of providing the services. Before the MCA, the Federal Reserve System provided payment services to its member banks for no explicit charge. The MCA expanded access to Federal Reserve System services, allowing the Federal Reserve System to offer services to all depository institutions, not just member banks. 
Congress again expanded the role of the Federal Reserve in the payment system in 1987 when it enacted the Expedited Funds Availability Act. This act expanded the Federal Reserve Board's authority to regulate certain aspects of check payments that are not processed by the Federal Reserve System. Through specific regulatory authority and its general authority as the central bank, the Federal Reserve plays an important role in the oversight of the nation's payment systems. The Federal Reserve Board has outlined its policy regarding the oversight of private-sector clearance and settlement systems in its Policy Statement on Payment Systems Risk. The second part of this policy incorporates risk management principles for such systems.

The Federal Reserve System competes with the private sector in providing wholesale payment services. Wholesale payment systems are designed to clear and settle time-critical and predominantly large-value payments. The two major wholesale payment systems in the United States are the Fedwire funds transfer system, owned and operated by the Federal Reserve System, and the Clearing House Interbank Payments System (CHIPS), which is owned and operated by the Clearing House Service Company LLC, a subsidiary of the New York Clearing House Association LLC (NYCHA), for use by the participant owners of the Clearing House Interbank Payments Company LLC (CHIPCo). Fedwire is a real-time gross settlement (RTGS) system through which transactions are cleared and settled individually on a continuous basis throughout the day. CHIPS began operations in 1970 as a replacement for paper-based payments clearing arrangements. Since January 22, 2001, CHIPS has operated as a real-time settlement system.
Payment orders sent over CHIPS are either debited and credited immediately against participants' available balances or netted and set off against other payment orders, with the resulting balances debited or credited against participants' available balances throughout the day. The transfer of balances into CHIPS and payments out of the system occur via Fedwire. The Federal Reserve System oversees CHIPS' compliance with its Policy Statement on Payment Systems Risk.

The size and aggregate levels of wholesale transactions necessitate timely and reliable settlement to avoid the risk that settlement failures would pose to the financial system. Although wholesale payments constitute less than 0.1 percent of the total number of noncash payment transactions, they represent 80 percent of the total value of these payments. Moreover, in 1999, the value of payment flows through the two major wholesale systems in the United States, Fedwire and CHIPS, was approximately 69 times the U.S. gross domestic product in that year.

The Federal Reserve System also competes with the private sector in providing retail payment services. For example, the Federal Reserve System provides ACH and check clearing services. ACH systems are an important mechanism for high-volume, moderate- to low-value, recurring payments, such as direct deposit of payrolls; automatic payment of utility, mortgage, or other bills; and other business- and government-related payments. The Federal Reserve System also competes with private-sector providers of check clearing services. To do this, the Federal Reserve operates a nationwide check clearing service with 45 check processing sites located across the United States. The Federal Reserve System's market share of payment services as of year-end 1999 is represented in table 1.
During forums held in May and June 1997 by the Federal Reserve System's Committee on the Federal Reserve System in the Payments Mechanism, committee members and Federal Reserve staff met with representatives from over 450 payment system participants, including banks of all sizes, clearing houses and third-party service providers, consumers, retailers, and academics. Although a few large banks and clearing houses thought the Federal Reserve System should exit the check collection and ACH businesses, the overwhelming majority of forum participants opposed Federal Reserve System withdrawal. Participants were concerned that the Federal Reserve System's exit could cause disruptions in the payment system.

The Core Principles illustrates how the central banks see their roles in pursuing their objective of smoothly functioning payment systems. Further, the Core Principles outlines central banks' roles in promoting the safety and efficiency of systemically important payment systems that they or others operate. The laws of the countries we studied support this aspect of the Core Principles. These countries charge their central banks with broad responsibility for ensuring the smooth operation and stability of payment systems. In their basic role as banks, central banks generally are charged with acting as a correspondent bank for other institutions, providing accounts, and carrying out interbank settlements. Nonetheless, countries' laws vary regarding the specific roles a central bank should play in the payment system.

Central banks in the G-10 countries and Australia have endorsed the Core Principles, which sets forth 10 basic principles that should guide the design and operation of systemically important payment systems in all countries, as well as four responsibilities of the central bank in applying the Core Principles. (The principles and responsibilities are presented in app. II.)
The overarching public policy objectives for the Core Principles are safety and efficiency in systemically important payment systems. Although the Core Principles generally is considered to apply to wholesale payment systems, some payments industry officials said that some payment systems that process retail payments could reasonably be considered systemically important because of the cumulative size and volume of the payments they handle. Providing for the safety of payment systems is mostly a matter of mitigating the risks inherent to the systems. These risks are listed and defined in table 2. Core Principle IV seeks to mitigate settlement risk by endorsing prompt final settlement, preferably during the day but, minimally, at the end of the day. The two major types of wholesale payment settlement systems are RTGS and multilateral netting systems. Recently, several hybrid systems have also been developed. (These two major types of systems are described further in app. III.) In general, multilateral netting systems offer greater liquidity because gross receipts and deliveries are netted to a single position at the end of the day. An institution can make payments during the day as long as its receipts cover the payments by the end of the day. However, multilateral netting systems without proper risk controls can lead to significant systemic risk. Because transactions are processed throughout the day, but not settled until the end of the day, the inability of a member to settle a net debit position could have large unexpected liquidity effects on other system participants or the economy more broadly. RTGS systems rely on immediate and final settlement of transactions, and these systems have much less exposure to systemic risk that could result from a settlement failure. 
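The netting arithmetic described above can be sketched in a few lines. This is a simplified illustration with hypothetical banks and amounts, not a description of any actual system: each participant's gross payments and receipts collapse to a single end-of-day net position, and the net positions across all participants must sum to zero.

```python
# Minimal sketch of multilateral netting (hypothetical banks A, B, C):
# gross payments during the day are reduced to one net position per bank.

def net_positions(payments):
    """payments: list of (payer, payee, amount) tuples.
    Returns each bank's net position: positive = net receiver,
    negative = net payer (a net debit position to settle at day's end)."""
    positions = {}
    for payer, payee, amount in payments:
        positions[payer] = positions.get(payer, 0) - amount
        positions[payee] = positions.get(payee, 0) + amount
    return positions

# Three gross payments totaling 230 settle as three small net positions.
payments = [("A", "B", 100), ("B", "C", 80), ("C", "A", 50)]
print(net_positions(payments))
```

In this example, bank A owes a net 50 while B and C are net receivers of 20 and 30; under RTGS, by contrast, each of the three gross payments would require funds or credit at the moment it was initiated. The example also shows the systemic-risk concern: if A could not cover its net debit at settlement, B's and C's expected receipts would fail as well.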
Without adequate provision of intraday credit, however, RTGS systems can create liquidity constraints because they require that funds or credit be available at the time that a payer initiates a transaction.

Efficiency in payment systems can be characterized as both operational and economic. Operational efficiency involves providing a required level and quality of payment services for minimum cost. Cost reductions beyond a certain point may result in slower, lower-quality service. This creates trade-offs among speed, risk, and cost. Going beyond operational efficiency, economic efficiency refers to (1) pricing that, in the long run, covers all of the costs incurred and (2) charging those prices in a way that does not inappropriately influence the choice of a method of payment.

The Core Principles sets forth four responsibilities of the central bank in applying the core principles, two of which address oversight functions. The first is that the central bank should ensure that the systemically important systems it operates comply with the Core Principles, and the second is that the central bank should oversee compliance with the Core Principles by systems it does not operate and should have the ability to carry out this oversight. Therefore, the Core Principles affirms the importance of central banks' oversight responsibility for their countries' systemically important payment systems, including those that they do not own or operate.

The laws of most of the countries we studied give the central bank broad responsibility for ensuring that payment systems operate smoothly. In addition, in their basic role as banks, central banks are generally charged with providing accounts to certain financial institutions and effecting interbank settlement. While some central banks are specifically charged with providing additional payment services or regulating private payment systems, others are not.
Similarly, regulatory and oversight authority is not always specified in laws but is obtained through historical development and the broader mission of the central bank. The European Central Bank (ECB) is the central bank for the countries that have adopted the euro. In conjunction with the euro area countries’ national central banks, the ECB oversees payment systems for the euro area and operates the Trans-European Automated Real-time Gross settlement Express Transfer (TARGET) system, the primary payment system for euro payments. The ECB’s powers and responsibilities are similar to those of national central banks. We therefore analyzed the ECB along with countries’ national central banks. In developing TARGET, the ECB set out strict rules regarding the national central banks’ provision of payment services, requiring each central bank to provide an RTGS system, which serves as a local component of TARGET. The laws of Canada, France, Japan, and the United Kingdom cast the central bank as a monitoring entity having general powers to ensure that payment systems do not pose systemic risk. The central banks in those countries are not specifically charged with providing particular payment clearing services. However, as a matter of practice, the central bank in France, which plans to discontinue its check clearing service in 2002, will continue to operate services related to check fraud. Although Australia’s law recognizes a limited role for the Reserve Bank of Australia to act as a service provider, the Reserve Bank of Australia’s primary purpose regarding payment systems is to serve as an oversight and regulatory mechanism designed to control risk and promote the overall efficiency of Australia’s financial system. 
German law authorizes the Bundesbank to furnish payment services, and the Bundesbank performs retail payment functions, including the processing of checks, credit transfers, and direct debits, and also owns and operates RTGSplus, an RTGS hybrid system for wholesale payments. The central banks we studied have general authority to take actions to protect against systemic risk. In some cases, the banks are to serve a particular regulatory function. For example, under Canadian law, the central bank sets qualifications for payment systems that it determines to pose systemic risk. However, except for Germany, Australia, and the United States, the laws of the countries we reviewed generally do not contemplate that the central bank is to regulate the provision of payment services for purposes unrelated to systemic risk. All of the central banks we studied provide settlement for wholesale payment systems. Moreover, these central banks participated in the design and development of, and have oversight over, wholesale payment systems. Most central banks play a role in providing these wholesale payment services. However, as demonstrated by the central banks we studied, central bank involvement in wholesale payment systems varies. Some central banks have full ownership and operational involvement in the payment system; others have little operational involvement beyond settlement services. Still others participate in partnerships. In some cases, the central bank is a major provider or perhaps the only provider of wholesale payment services. The Federal Reserve System, as previously noted, is a major provider of wholesale payment services. Each of the central banks we reviewed has participated in the design and development of its country’s wholesale payment system. For example, the Bundesbank collaborated in developing the RTGSplus system. The Bank of France played a major role in the development of France’s systems. 
The Bank of England cooperated with the Clearing House Automated Payment System (CHAPS) in the development of a new system, NewCHAPS; the Bank of Canada assisted in the design and development of the Large Value Transfer System. In the G-10 countries, the first automated RTGS system was Fedwire in the United States, which is owned and operated by the Federal Reserve System. Although some net settlement systems for wholesale payments remain in use today, many countries are transitioning to RTGS systems. In Europe, various decisions over the past 5 to 10 years have encouraged current and potential euro area countries to develop national RTGS systems. The trend toward RTGS systems extends beyond Europe’s boundaries, as countries worldwide are adopting RTGS systems. The central banks we studied played various roles in providing and overseeing wholesale payment services. All central banks provide key settlement services for wholesale payment systems. Some central banks own and operate wholesale payment systems that include clearance and settlement, while others provide only oversight and settlement, leaving clearance and other processing activities to other parties. There is no clear pattern in the roles played by central banks in clearing wholesale payments. In addition to the United States, two of the central banks we studied, the Bundesbank and the Bank of France, have full ownership of their respective wholesale payment systems. The Bundesbank owns and operates the RTGSplus system, which was developed with the input of the German banking industry. The Bundesbank has full control over the practices of the system for large-value payments. The Bank of France owns and manages Transferts Banque de France, an RTGS system that is one of the two wholesale payment systems in France. The Bank of France is also a joint owner of the company that owns and operates France’s other wholesale payment system, a hybrid, real-time net settlement system. 
Although the Bank of France is only a partial owner of this system, it can exert considerable influence over it by virtue of its ownership role in the controlling company. The Bank of England is a member and shareholder of CHAPS Inc., which operates England’s sterling and euro RTGS systems. Although the Bank of England does not own or manage any payment clearing system, CHAPS payments settle by transferring funds among participating institutions’ Bank of England accounts. The Bank of England is the settlement bank for both CHAPS Sterling and CHAPS Euro. The Bank of Canada has a more limited operational role in its system. The Bank of Canada entrusts the ownership and operation of the Large Value Transfer System (LVTS) to the Canadian Payments Association, which the Bank of Canada chairs. The Bank of Canada expressly guarantees settlement of LVTS in the event that more than one participant defaults simultaneously and losses exceed available participant collateral. This guarantee has been likened to “catastrophic insurance with a very large deductible,” the deductible being the collateral provided by the participants. Although the extent of central bank oversight over retail payment operations varies, central banks generally consider retail payments an important component of the payment system. As such, central banks have some responsibility for promoting well-functioning retail payment systems. The operational role of the central bank in retail payment clearing varies considerably among the countries we studied. The basic structure of retail payment systems depends largely on the structure of the underlying financial system and on the historical evolution of payment processes. Factors that influence central bank involvement in retail payment systems include the history and structure of the country’s payment system and banking industry. 
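The “very large deductible” arrangement described above for LVTS can be sketched as a simple loss-allocation rule. This is a minimal illustration with hypothetical amounts; the actual LVTS rules are more elaborate:

```python
def allocate_loss(shortfall, participant_collateral):
    """Split a settlement shortfall between the participants' collateral
    pool (the 'deductible') and the central bank guarantee (the excess)."""
    from_collateral = min(shortfall, participant_collateral)
    from_central_bank = shortfall - from_collateral
    return from_collateral, from_central_bank

# Shortfall smaller than the collateral pool: no central bank exposure.
print(allocate_loss(800, 1_000))    # (800, 0)
# Shortfall exceeding the pool: the central bank absorbs only the excess.
print(allocate_loss(1_500, 1_000))  # (1000, 500)
```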
While we identified several factors that influenced the involvement of a central bank in its country’s retail payment system, these factors interact uniquely and occur to varying degrees in the systems we studied. Retail payments are generally lower in value than wholesale payments and, from the perspective of the financial system, less urgent, but they occur far more frequently. They typically include consumer and commercial payments for goods and services. Noncash retail payment instruments are generally categorized as paper-based (most commonly checks) or electronic (most commonly credit cards, credit transfers, debit cards, and direct debits). These payment instruments are further described in table 3. Central banks provide settlement for retail payments, but commercial banks also settle retail payments. Where the central bank provides settlement, it does so for “direct participants”—that is, institutions having settlement accounts at the central bank. Settlement of payments at the central bank sometimes requires tiering arrangements. Under these arrangements, direct participants settle payments through their accounts at the central bank, while indirect participants settle through a direct participant with whom they have a settlement arrangement. Such is the case with the Bank of England, which acts as a banker to the settlement banks that are direct members of the United Kingdom’s primary payment clearing association. Settlement of retail payments may also occur through settlement agents, third-party arrangements, or correspondent accounts that institutions hold with each other for bilateral settlement. Although many central banks work to ensure that their retail payment systems are well-functioning, their approaches diverge. Some central banks play a prominent regulatory and operational role in retail payments and see these roles as keys to fostering well-functioning retail systems, while others assume more limited roles. 
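The tiering arrangement described above can be sketched as follows. The institutions and positions are hypothetical; negative values represent net debits:

```python
# Indirect participants settle through a direct participant that holds
# a settlement account at the central bank.
tiering = {
    "Small Bank X": "Direct Bank A",
    "Small Bank Y": "Direct Bank A",
    "Small Bank Z": "Direct Bank B",
}

# End-of-day net positions (hypothetical; negative = net debit).
positions = {
    "Direct Bank A": -100,
    "Direct Bank B": 250,
    "Small Bank X": -50,
    "Small Bank Y": 30,
    "Small Bank Z": -130,
}

# Only direct participants appear on the central bank's books; each
# absorbs the positions of the indirect participants it sponsors.
books = {}
for institution, position in positions.items():
    direct = tiering.get(institution, institution)
    books[direct] = books.get(direct, 0) + position

print(books)  # {'Direct Bank A': -120, 'Direct Bank B': 120}
```

The central bank thus settles across two accounts rather than five, with each direct participant managing the credit exposure to the indirect participants it sponsors.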
Whatever the level of involvement in oversight or operations, most central banks consider retail payments as important components of the payment system and therefore assume some responsibility in promoting well-functioning retail payment systems. A number of structural factors influence the central bank’s role in retail payments. For example, the involvement of the central bank in check clearing can vary. In countries with a concentrated banking industry, on-us check clearing will occur with higher frequency. On-us checks are checks that are deposited at the same bank on which they are drawn, so that no third party, including the central bank, is required for clearing or settlement. For example, Canada has few banks, heavy check use, and little central bank involvement in clearing retail payments. On the other hand, the United States has a large number of banks and its central bank is heavily involved in providing check clearing services. If a country has many smaller banks, such as savings, rural, and cooperative banks, there will be more need for some kind of retail clearance system, thereby creating greater potential need for central bank involvement. Identifying the extent to which payment preferences influence central bank involvement in clearing payments is difficult. Some have suggested that central banks in countries that rely heavily on paper-based instruments are more involved in clearing retail payments, and that central banks of countries that are more reliant on electronic payments provide fewer clearing services. Central banks involved in check clearing include those in Germany, France, and the United States. France and the United States rely heavily on checks for retail payments. In contrast, the Bundesbank is heavily involved in clearing a variety of retail payment instruments, but Germany is not particularly reliant on checks as a means of payment. 
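The on-us distinction above amounts to a simple routing decision, sketched here with hypothetical bank names:

```python
def clearing_route(drawee_bank, depositing_bank):
    """Decide how a deposited check is cleared."""
    if drawee_bank == depositing_bank:
        # Drawn on and deposited at the same bank: settled internally,
        # with no third party (including the central bank) involved.
        return "on-us"
    # Otherwise the check must be cleared between banks, e.g., via a
    # correspondent bank, a clearinghouse, or the central bank.
    return "interbank"

print(clearing_route("Bank A", "Bank A"))  # on-us
print(clearing_route("Bank A", "Bank B"))  # interbank
```

In a concentrated banking industry, the first branch is taken more often, which is one reason such countries have less need for central bank clearing services.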
The physical size of a country affects the distances that payment instructions might have to travel between the paying and the drawing banks. This has particular relevance in countries that rely heavily on paper-based instruments such as checks, which might have to be physically moved great distances to be processed. For example, this is the case in the United States, which is much larger than any European country. The United States currently has approximately 19,000 depository institutions. Canada, on the other hand, has far fewer financial institutions but is also physically large and uses checks extensively. Private-sector correspondent banks clear many checks and compete with the central bank. The central bank, however, is perceived as a reliable and neutral intermediary to clear payments and provide settlement on a large scale for a diverse set of institutions. Table 4 shows the relative importance of noncash payment instruments in selected countries. A central bank’s role in the retail payment system reflects historical events and developments that have shaped retail payment systems in a particular country over many years. For example, the GIRO system serves as a primary retail payment method in many European countries. The GIRO system was originally developed by European postal agencies, rather than by banks. Historically, European banking systems were largely decentralized and in most cases highly regulated. Therefore, in the absence of an efficient retail payment system developed by the banking industry, payers in most European countries turned to national institutions, such as the postal service, which offered credit transfers (so-called GIRO payments) through a nationwide network of branches. Commercial banks subsequently began to offer GIRO services. As a result of these events, many European countries have well-developed systems that do not rely on central bank clearing for credit transfers. 
These systems were originally established by the public sector to respond to needs that were not being met by the private sector. Similarly, as previously noted, the Federal Reserve System was established to respond to events that pointed to the lack of a private remedy to market problems. We received comments on a draft of this report from the Board of Governors of the Federal Reserve System. These comments are reprinted in appendix IV. Board staff also provided technical comments and corrections that we incorporated as appropriate. We are sending copies of this report to the chairman of the House Subcommittee on Domestic Monetary Policy, Technology, and Economic Growth; the chairman of the Board of Governors of the Federal Reserve System; the president of the Federal Reserve Bank of Atlanta; and the president of the Federal Reserve Bank of New York. We will make copies available to others on request. Please contact me or James McDermott, Assistant Director, at (202) 512-8678 if you or your staff have any questions concerning this report. Other key contributors to this report are James Angell, Thomas Conahan, Tonita W. Gillich, Lindsay Huot, and Desiree Whipple. The objectives of this report are to (1) identify internationally recognized objectives for payment systems and central bank involvement in those systems, (2) describe the roles of central banks in the wholesale payment systems of other major industrialized countries and the key factors that influence those roles, and (3) describe the roles of central banks in the retail payment systems of other major industrialized countries and the key factors that influence those roles. In analyzing the roles of other central banks in payment systems, we focused on countries with relatively modern, industrialized economies. These countries included Australia, Canada, France, Germany, Japan, the United Kingdom, and the United States. 
To identify widely held public policy objectives for payment systems, we reviewed Core Principles for Systemically Important Payment Systems, which was developed by the Committee on Payment and Settlement Systems (CPSS) of the Bank for International Settlements. The CPSS established the Task Force on Payment System Principles and Practices in May 1998 to consider what principles should govern the design and operation of payment systems in all countries. The task force sought to develop an international consensus on such principles. The task force included representatives not only from G-10 central banks and the European Central Bank but also from 11 other national central banks of countries in different stages of economic development from all over the world and representatives from the International Monetary Fund and the World Bank. The task force also consulted groups of central banks in Africa, the Americas, Asia, the Pacific Rim, and Europe. We also reviewed materials available on the Web sites of the central banks we studied; these sites often included mission statements, basic data, and authorizing statutes. We reviewed a variety of legal analyses and commentaries to analyze those statutes. Where we make statements regarding central banks’ authorizing statutes, they are based on these sources rather than on our original legal analysis. To describe the roles of central banks in the wholesale and retail payment systems of other major industrialized countries and the key factors that influence those roles, we reviewed materials available on central bank Web sites as well as other articles and publications from various central banks. We reviewed publications available from the Bank for International Settlements, as well as the European Central Bank’s Blue Book: Payment and Securities Settlement Systems in the European Union. We also reviewed numerous articles and commentaries on the roles of central banks as well as discussions of recent reform efforts. 
To enhance our understanding of these materials, we interviewed Federal Reserve officials, members of trade associations, and officials from private-sector payment providers. We conducted our work in Washington, D.C., and New York, N.Y., between June 2001 and January 2002 in accordance with generally accepted government auditing standards. The core principles for systemically important payment systems (core principles) are shown in table 5. The responsibilities of the central bank in applying the core principles are as follows: The central bank should define clearly its payment system objectives and should disclose publicly its role and major policies with respect to systemically important payment systems. The central bank should ensure that the systems it operates comply with the core principles. The central bank should oversee compliance with the core principles by systems it does not operate and should have the ability to carry out this oversight. The central bank, in promoting payment system safety and efficiency through the core principles, should cooperate with other central banks and with any other relevant domestic or foreign authorities. Different forms of settlement for wholesale payments result in different risks. Various wholesale payment systems in major industrialized countries use similar means to transmit and process wholesale payments. However, they sometimes use different rules for settling those transactions. In general, wholesale payments are sent over separate, secure, interbank electronic wire transfer networks and are settled on the books of a central bank. That is, settlement is carried out by exchange of funds held in banks’ reserve accounts at a central bank. However, various wholesale payment systems use different rules for settling these large-value payments. 
Some systems operate as real-time gross settlement (RTGS) systems, which continuously clear payment messages that are settled by transfer of central bank funds from paying banks to receiving banks. Other systems use net settlement rules, wherein the value of all payments due to and due from each bank in the network is calculated on a net basis before settlement. Each form of settling wholesale payments presents different risks to participants. Recently, some hybrid systems have been developed, building on the strengths and minimizing the risks associated with pure RTGS or netting systems. RTGS systems are gross settlement systems in which both processing and settlement of funds transfer instructions take place continuously, or in real time, on a transaction-by-transaction basis. RTGS systems settle funds transfers without netting debits against credits and provide final settlement in real time, rather than periodically at prespecified times. In most RTGS systems, the central bank, in addition to being the settlement agent, can grant intraday credit to help provide the liquidity needed for the smooth operation of these systems. Participants typically can make payments throughout the day and only have to repay any outstanding intraday credit by the end of the day. Because RTGS systems provide immediate finality of gross settlements, there is no systemic risk—that is, the risk that the failure to settle by one possibly insolvent participant would lead to settlement failures of other solvent participants due to unexpected liquidity shortfalls. However, as the entity guaranteeing the finality of each payment, the central bank faces credit risk created by the possible failure of a participant who uses intraday credit. In the absence of collateral for such overdrafts, the central bank assumes some amount of credit risk until the overdrafts are eliminated at the end of the day. 
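Central banks that extend intraday credit typically limit and sometimes price it, for example by capping daylight overdrafts at a multiple of an institution's capital and charging a fee on average overdrafts. The sketch below illustrates that mechanism; the cap multiple and fee rate are illustrative assumptions, not actual central bank parameters:

```python
def overdraft_cap(capital, multiple=1.5):
    """Maximum daylight overdraft, set as a multiple of capital
    (illustrative multiple)."""
    return capital * multiple

def daylight_fee(avg_overdraft, annual_rate=0.0036, hours_open=10):
    """Fee on one day's average overdraft: an annual rate (illustrative),
    prorated to the operating day and charged on a 360-day-year basis."""
    return avg_overdraft * annual_rate * (hours_open / 24) / 360

capital = 200_000_000
print(f"cap: {overdraft_cap(capital):,.0f}")    # cap: 300,000,000
print(f"fee: {daylight_fee(50_000_000):,.2f}")  # fee: 208.33
```

Even a small fee gives participants an incentive to economize on daylight liquidity, while the cap and any collateral requirement limit the central bank's credit exposure.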
In recent years, central banks have taken steps to more directly manage intraday credit, including collateralization requirements, caps on intraday credit, and charging interest on intraday overdrafts. Fedwire was established in 1918 as a telegraphic system and was the first RTGS system among the G-10 countries. Presently, account balances are maintained minute by minute. The Federal Reserve Banks generally allow financially healthy institutions the use of daylight overdrafts up to a set multiple of their capital and may impose certain additional requirements, including collateral. In 1994, the Federal Reserve System began assessing a fee for the provision of this daylight liquidity. Other central banks have only recently adopted RTGS systems and have established a variety of intraday credit policies, such as intraday repurchase agreements and collateralized daylight overdrafts. Other networks operate under net settlement rules. Under these rules, the value of all payments due to and due from each bank in the network is calculated on a net basis bilaterally or multilaterally. This occurs at some set interval—usually the end of each business day—or, in some newly developed systems, continuously throughout the day. Banks ending the day in a net debit position transfer reserves to the net creditors, typically using a settlement account at the central bank. Net settlement systems with delayed or end-of-business-day settlement enhance liquidity in the payment system because such systems potentially allow payers to initiate a transaction without having the funds immediately on hand, provided the funds are available by final settlement. However, this can increase the most serious risk in netting systems, which is systemic risk. Recognizing that systemic risk is inherent in netting systems, central banks of the G-10 countries formulated minimum standards for netting schemes in the Lamfalussy Standards. 
The standards stress the legal basis for netting and the need for multilateral netting schemes to have adequate procedures for the management of credit and liquidity risks. Although netting arrangements generally reduce the need for central bank funds, they also expose the participants to credit risks because participants implicitly extend large volumes of payment-related intraday credit to one another. This credit represents the willingness of participants to accept or send payment messages on the assumption that the sender will cover any net debit obligations at settlement. The settlement of payments by the delivery of reserves at periodic (usually daily) intervals is therefore an important test of the solvency and liquidity of the participants. In recent years, central banks in countries using net settlement rules have taken steps to reduce credit risks in these systems as part of overall programs to reduce systemic risks. 
The central banks of major industrialized countries have agreed on common policy objectives and presented them in the Core Principles for Systemically Important Payment Systems. Intended to help promote safer and more efficient payment systems worldwide, the Core Principles outline specific policy recommendations for systemically important payment systems and describe the responsibilities of the central banks. All of the central banks GAO studied seek to ensure that their wholesale payment systems operate smoothly and minimize systemic risk. All of the central banks provide settlement services for their countries' wholesale payment systems. Some central banks also provide wholesale clearing services. Others own the system but have little operational involvement in clearing, and still others participate in partnerships with the private sector. All of the central banks GAO studied provide settlement for some retail payment systems. Some, but not all, central banks exercise regulatory authority over retail payment systems in their countries. Central banks also tend to have less operational involvement in countries where the banking industry is relatively concentrated. In some cases, laws governing payments and the structure of the financial services industry direct the involvement of central banks in retail payment systems.
Emissions from a variety of human-generated sources, including commercial aircraft, trap heat in the atmosphere and contribute to climate change. During flight operations, aircraft emit a number of greenhouse gas and other emissions, including carbon dioxide, nitrogen oxides (NOx), soot, and water vapor. Figure 1 shows the primary emissions from commercial aircraft. Carbon dioxide emissions from aircraft are a direct result of fuel burn. For every gallon of jet fuel burned, about 21 pounds of carbon dioxide are emitted. Reducing the amount of fuel burned, therefore, also reduces the amount of carbon dioxide emitted. Water vapor emissions and certain atmospheric temperature and humidity conditions can lead to the formation of contrails, a cloudlike trail of condensed water vapor, and can induce the creation of cirrus clouds. Both contrails and cirrus clouds are believed to have a warming effect on the earth’s atmosphere. Aircraft also emit other pollutants that affect local air quality. Finally, airport operations are sources of greenhouse gas and other emissions, which we are not examining in this report. Historically, the commercial aviation industry has grown substantially in the United States and worldwide and is a contributor to economic growth. Between 1981 and 2008, passenger traffic increased 226 percent in the United States on a revenue passenger mile basis and 257 percent globally on a revenue passenger kilometer basis. According to the FAA, in 2006 the civil aviation industry in the United States directly and indirectly contributed 11 million jobs and 5.6 percent of total gross domestic product (GDP) to the U.S. economy. Globally, the International Air Transport Association estimated that in 2007 the aviation industry had a global economic impact of over $3.5 trillion, equivalent to about 7.5 percent of worldwide GDP. Recently, however, the airline industry has experienced declining traffic and financial losses as the result of the current recession. 
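The fuel-to-carbon relationship above lends itself to a simple back-of-the-envelope calculation (the flight fuel burn below is hypothetical):

```python
# The report's figure: about 21 pounds of CO2 are emitted per gallon of
# jet fuel burned.
CO2_LBS_PER_GALLON = 21

def co2_from_fuel(gallons):
    """CO2 emitted, in pounds, for a given fuel burn."""
    return gallons * CO2_LBS_PER_GALLON

# A hypothetical flight burning 5,000 gallons:
pounds = co2_from_fuel(5_000)
print(f"{pounds:,} lb of CO2 ({pounds / 2_000:.1f} short tons)")
# 105,000 lb of CO2 (52.5 short tons)
```

Because emissions scale directly with fuel burned, any measure that reduces fuel burn reduces carbon dioxide emissions proportionally.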
The fuel efficiency of commercial jet aircraft has improved over time. According to IPCC, aircraft today are about 70 percent more fuel efficient on a per passenger kilometer basis than they were 40 years ago because of improvements in engines and airframe design. Jet fuel is a major cost for airlines. In 2008, when global fuel prices were high, jet fuel accounted for about 30 percent of U.S. airlines’ total operating expenses, compared with 23 percent during 2007. Fuel efficiency (measured by available seat-miles per gallon consumed) for U.S. carriers increased about 17 percent between 1990 and 2008, as shown in figure 2. Internationally, according to the International Air Transport Association, fuel efficiency (measured by revenue passenger kilometers) improved 16.5 percent between 2001 and 2007. According to FAA, between 2000 and early 2008 U.S. airlines reduced fuel burn and emissions while transporting more passengers and cargo. In addition, commercial aviation has become less energy intensive over time—that is, transporting a single passenger a single mile uses less energy, measured in British thermal units, than it previously did. See figure 3 showing energy intensity over time of aviation and other modes of transportation. However, despite these efficiency improvements, overall fuel burn and emissions of U.S. airlines are expected to grow in the future. As seen in figure 4, FAA forecasts that between 2008 and 2025 fuel consumption of U.S.-based airlines will increase an average of 1.6 percent per year while revenue passenger miles will increase an average of 3.1 percent per year over the same period. 
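Compounding the forecast rates above over the 2008 to 2025 period shows why total emissions can grow even as efficiency improves. This is a back-of-the-envelope sketch with both baselines normalized to 1:

```python
years = 2025 - 2008  # 17 years

fuel_growth = 1.016 ** years     # fuel burn, +1.6 percent per year
traffic_growth = 1.031 ** years  # revenue passenger miles, +3.1 percent per year

print(f"fuel burn: x{fuel_growth:.2f}")     # about x1.31
print(f"traffic:   x{traffic_growth:.2f}")  # about x1.68
# Fuel per passenger-mile keeps falling even as total fuel burn rises.
print(f"fuel per passenger-mile: x{fuel_growth / traffic_growth:.2f}")  # about x0.78
```

In other words, the forecast implies roughly a 31 percent rise in fuel burn alongside a 68 percent rise in traffic, so fuel used per passenger-mile continues to decline while total emissions still grow.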
To develop a better understanding of the effects of human-induced climate change and identify options for adaptation and mitigation, two United Nations organizations established IPCC in 1988 to assess scientific, technical, and socio-economic information on the effects of climate change. IPCC releases and periodically updates estimates of future greenhouse gas emissions from human activities under different economic development scenarios. In 1999, IPCC released its report, Aviation and the Global Atmosphere, prepared at the request of the International Civil Aviation Organization (ICAO)—a United Nations organization that aims to promote the establishment of international civilian aviation standards and recommended practices and procedures. In 2007, IPCC released an update on emissions from transportation and other sectors called the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. These reports were developed with input from over 300 experts worldwide and are internationally accepted and used for policy-making. A variety of federal agencies have roles in addressing aviation emissions. In 2004, FAA and other organizations including the National Aeronautics and Space Administration (NASA) released a report, Aviation and the Environment: A National Vision Statement, Framework for Goals and Recommended Actions, through the collaborative PARTNER program, stating a general goal to reduce overall levels of emissions from commercial aviation and proposing actions to deal with aviation emissions. FAA also is involved in a number of emissions-reduction initiatives—including work on low-emissions technologies and low-carbon alternative fuels; the implementation of a new air traffic management system, the Next Generation Air Transportation System (NextGen); and climate research to better understand the impact of emissions from aviation. NASA has been involved in research that has led to the development of technologies that reduce aircraft emissions. 
Currently, NASA’s Subsonic Fixed-Wing project, part of its Fundamental Aeronautics program, aims to help develop technologies to reduce fuel burn, noise, and emissions in the future. Both FAA and NASA are involved in the Aviation Climate Change Research Initiative, whose goals include improving the scientific understanding of aviation’s impact on climate change. Also, as mandated under Title II of the Clean Air Act, the Environmental Protection Agency (EPA) promulgates certain emissions standards for aircraft and aircraft engines and has adopted emission standards matching those for aircraft set by ICAO. While neither ICAO nor EPA has established standards for aircraft engine emissions of carbon dioxide, ICAO is currently discussing proposals for carbon dioxide emissions standards and considering a global goal for fuel efficiency. In addition, in 2007 a coalition of environmental interest groups filed a petition with EPA asking the agency, pursuant to the Clean Air Act, to make a finding that “greenhouse gas emissions from aircraft engines may be reasonably anticipated to endanger the public health and welfare” and, after making this endangerment finding, promulgate regulations for greenhouse gas emissions from aircraft engines. International concerns about the contribution of human activities to global climate change have led to several efforts to reduce their impact. In 1992, the United Nations Framework Convention on Climate Change (UNFCCC)—a multilateral treaty whose objective is to stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous human interference with the climate system—was signed. By 1995, the parties to the UNFCCC, including the United States, realized that progress toward this goal was not sufficient. In December 1997, the parties reconvened in Kyoto, Japan, to adopt binding measures to reduce greenhouse gas emissions. 
Under the resulting Kyoto Protocol, which the United States has not ratified, industrialized nations committed to reduce or limit their emissions of carbon dioxide and other greenhouse gases during the 2008 through 2012 commitment period. The Protocol directed the industrialized nations to work through ICAO to reduce or limit emissions from aviation, but international aviation emissions are not explicitly included in Kyoto’s targets. In 2004, ICAO endorsed the further development of an open emissions trading system for international aviation, and in 2007 called for mutual agreement between contracting states before implementation of an emissions trading scheme. In part to meet its Kyoto Protocol requirements, the EU implemented its ETS in 2005, which sets a cap on carbon dioxide emissions and allows regulated entities to buy and sell emissions allowances with one another. In 2008, the European Parliament and the Council of the European Union passed a directive, or law, to include aviation in the ETS. Under the directive, beginning in 2012 a cap will be placed on total carbon dioxide emissions from all covered flights by aircraft operators into or out of an EU airport. Many stakeholders and countries have stated objections to the EU’s plans and legal challenges are possible. (See app. I for a discussion of the ETS’s inclusion of aviation.) In December 2009, the parties to the UNFCCC will convene in Copenhagen, Denmark, to discuss and negotiate a post-Kyoto framework for addressing global climate change. IPCC estimates that aviation emissions currently account for about 2 percent of global human-generated carbon dioxide emissions and about 3 percent of the radiative forcing of all global human-generated emissions (including carbon dioxide) that contribute to climate change. 
On the basis of available data and assumptions about future conditions, IPCC forecasted emissions to 2015 and forecasted three scenarios—low, medium, and high—for growth in global aviation carbon dioxide emissions from 2015 to 2050. These scenarios are driven primarily by assumptions about economic growth—the factor most closely linked historically to the aviation industry’s growth—but they also reflect other aviation-related assumptions. Because IPCC’s forecasts depend in large part on assumptions, they, like all forecasts, are inherently uncertain. Nevertheless, as previously noted, IPCC’s work reflects the input of over 300 leading and contributing authors and experts worldwide and is internationally accepted and used for policy making. According to IPCC, global aviation contributes about 2 percent of the global carbon dioxide emissions caused by human activities. This 2 percent estimate includes emissions from all global aviation, both commercial and military. Global commercial aviation, including cargo, accounted for over 80 percent of this estimate. In the United States, domestic aviation contributes about 3 percent of total carbon dioxide emissions, according to EPA data. Many industry sectors, such as the electricity-generating and manufacturing sectors, contribute to global carbon dioxide emissions, as do residential and commercial buildings that use fuel and power. The transportation sector also contributes substantially to global carbon dioxide emissions, accounting for about 20 percent of the global total. Road transportation accounts for the largest share of carbon dioxide emissions from the transportation sector—74 percent; aviation accounts for about 13 percent of carbon dioxide emissions from all transportation sources; and other transportation sources, such as rail, account for the remaining 13 percent. 
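These rounded sector shares can be cross-checked with simple arithmetic: aviation’s implied share of global carbon dioxide emissions is its share within the transportation sector multiplied by transportation’s share of the global total. The sketch below is an illustrative calculation using only the rounded percentages cited above; it is not IPCC data, and the precision of its result should not be overread.

```python
# Rough cross-check of the rounded sector shares cited in the text.
transport_share_of_global = 0.20    # transportation: ~20% of global CO2
road_share_of_transport = 0.74      # road: ~74% of transportation CO2
aviation_share_of_transport = 0.13  # aviation: ~13% of transportation CO2
other_share_of_transport = 0.13     # rail and other: ~13%

# The shares within transportation should sum to roughly 100 percent.
assert abs(road_share_of_transport + aviation_share_of_transport
           + other_share_of_transport - 1.0) < 0.01

# Aviation's implied share of global CO2 emissions:
aviation_global = transport_share_of_global * aviation_share_of_transport
print(f"Aviation: ~{aviation_global:.1%} of global CO2")  # ~2.6% of global CO2
```

The result, roughly 2.6 percent, is consistent with IPCC’s “about 2 percent” estimate for all aviation, given that every input is a rounded figure.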
Figure 5 shows the relative contributions of industry, transportation, and all other sources to global carbon dioxide emissions and breaks down transportation’s share to illustrate the relative contributions of road traffic, aviation, and other transportation sources. When other aviation emissions—such as nitrogen oxides, sulfate aerosols, and water vapor—are combined with carbon dioxide, aviation’s estimated share of global emissions increases from 2 percent to 3 percent, according to IPCC. However, the impact of these other emissions on climate change is less well understood than the impact of carbon dioxide, making IPCC’s combined estimate more uncertain than its estimate for carbon dioxide alone. Aviation emissions may contribute directly or indirectly to climate change. Although most aviation emissions have a warming effect, sulfate aerosols and a chemical reaction involving methane have a cooling effect. The warming effect is termed “positive radiative forcing” and the cooling effect “negative radiative forcing.” Aviation emissions also may contribute to the formation of cirrus clouds, which can cause atmospheric warming, but the scientific community does not yet understand this process well enough to quantify the warming effect of aviation-induced cirrus clouds. Table 1 describes the direct or indirect effects of aviation emissions on climate change. According to IPCC, when the positive radiative forcing effects of carbon dioxide and the positive and negative radiative forcing effects of other aviation emissions are combined, global aviation contributes about 3 percent of human-generated positive radiative forcing. When the radiative forcing effects of the various aviation emissions are considered, carbon dioxide, nitrogen oxides, and contrails have the greatest potential to contribute to climate change. 
The level of scientific understanding about the impact of particular aviation emissions on radiative forcing varies, making estimates of their impact on climate change uncertain to varying degrees. A recent report that described levels of scientific understanding of aviation emissions found that the levels for carbon dioxide were high; the levels for nitrogen oxides, water vapor, sulfates, and soot were medium; and the levels for contrails and aviation-induced cirrus clouds were low. Aviation’s contribution to total emissions, estimated at 3 percent, could be as low as 2 percent or as high as 8 percent, according to IPCC. Figure 6 shows IPCC’s estimate of the relative positive radiative forcing effects of each type of aviation emission for the year 2000. The overall radiative forcing from aviation emissions is estimated to be approximately two times that of carbon dioxide alone. IPCC generated three scenarios that forecasted the growth of global aviation carbon dioxide emissions from the near term (2015) to the long term (2050) and described these scenarios in its 1999 report. These forecasts are generated by models that incorporate assumptions about future conditions, the most important of which are assumptions about global economic growth and related increases in air traffic. Other assumptions include improvements in aircraft fuel efficiency and air traffic management and increases in airport and runway capacity. Because the forecasts are based on assumptions, they are inherently uncertain. Historically, global economic growth has served as a reliable indicator of air traffic levels. Aviation traffic has increased during periods of economic growth and slowed or decreased during economic slowdowns. As figure 7 shows, U.S. and global passenger traffic (including the U.S.) generally trended upward from 1978 through 2008, but leveled off or declined during economic recessions in the United States. 
Forecast models described in IPCC’s report incorporate historical trends and the relationship between economic growth and air traffic to produce scenarios of global aviation’s potential future carbon dioxide emissions. IPCC used a NASA emissions forecast for carbon dioxide emissions until 2015. IPCC used an ICAO emissions forecasting model to forecast emissions from 2015 to 2050 using three different assumptions for global economic growth—low (2.0 percent), medium (2.9 percent), and high (3.5 percent). As a result, IPCC produced three different potential scenarios for future air traffic and emissions. The 2050 scenarios include a 40 percent to 50 percent increase in fuel efficiency by 2050 from improvements in aircraft engines and airframe technology and from deployment of an advanced air traffic management system (these are discussed in more detail below). Figure 8 shows IPCC’s low-, mid-, and high-range scenarios for carbon dioxide emissions for 2015, 2025, and 2050 as a ratio over 1990 emissions. IPCC used the medium economic growth rate scenario to estimate aviation’s contribution to overall emissions in 2050. IPCC compared aviation and overall emissions for the future and found that global aviation carbon dioxide emissions could increase at a greater rate than carbon dioxide emissions from all other sources of fossil fuel combustion. For example, for the medium GDP growth rate scenario, IPCC assumed a 2.9 percent annual average increase in global GDP, which translated into almost a tripling (a 2.8 times increase) of aviation’s global carbon dioxide emissions from 1990 to 2050. For the same medium GDP growth scenario, IPCC also estimated a 2.2 times increase of carbon dioxide emissions from all other sources of fossil fuel consumption worldwide during this period. 
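The multipliers in IPCC’s medium scenario imply fairly modest average annual growth rates once they are compounded over the 60 years from 1990 to 2050. The sketch below is a back-of-the-envelope derivation using only the figures cited above; the specific per-year rates are our arithmetic, not figures stated by IPCC.

```python
# Back-of-the-envelope: what constant annual growth rate, compounded over
# 1990-2050 (60 years), produces the multipliers in IPCC's medium scenario?
years = 2050 - 1990  # 60 years

aviation_multiplier = 2.8  # aviation CO2 nearly triples (2.8x) by 2050
other_multiplier = 2.2     # all other fossil-fuel CO2 sources grow 2.2x

aviation_rate = aviation_multiplier ** (1 / years) - 1
other_rate = other_multiplier ** (1 / years) - 1

print(f"Aviation CO2: ~{aviation_rate:.2%} per year")   # ~1.73% per year
print(f"Other sources: ~{other_rate:.2%} per year")     # ~1.32% per year
# Both implied rates are well below the assumed 2.9% annual GDP growth,
# reflecting the 40-50 percent fuel-efficiency gains IPCC built into the
# scenario.
```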
Overall, using the midrange scenario for global carbon dioxide emissions and projections for emissions from other sources, IPCC estimated that in 2050, carbon dioxide emissions from aviation could be about 3 percent of global carbon dioxide emissions, up from 2 percent. IPCC further estimated that, when other aviation emissions were combined with carbon dioxide emissions, aviation would account for about 5 percent of global human-generated positive radiative forcing, up from 3 percent. IPCC concluded that the aviation traffic estimates for the low-range scenario, though plausible, were less likely given aviation traffic trends at the time the report was published in 1999. IPCC’s 2007 Fourth Assessment Report included two additional forecasts of global aviation carbon dioxide emissions for 2050 developed through other studies. Both of these studies forecasted mid- and high-range aviation carbon dioxide emissions for 2050 that were within roughly the same range as the 1999 IPCC report’s forecasts. For example, one study using average GDP growth assumptions that were similar to IPCC’s showed mid- and high-range estimates that were close to IPCC’s estimates. In 2005, FAA forecasted a 60 percent growth in aviation carbon dioxide and nitrogen oxide emissions from 2001 to 2025. However, FAA officials recently noted that this estimate did not take into account anticipated aircraft fleet replacements, advances in aircraft and engine technology, and improvements to the air transportation system, nor did it reflect the recent declines in air traffic due to the current recession. After taking these factors into account, FAA reduced its estimate by half and now estimates about a 30 percent increase in U.S. aviation emissions from 2001 to 2025. To account for some uncertainties in FAA’s emissions forecasting, FAA officials said they are working on creating future scenarios for the U.S. 
aviation sector to assess the influence of a range of technology and market assumptions on future emissions levels. While recent aviation forecasts are generally consistent with IPCC’s expectation for long-term global economic growth, the current economic slowdown has led to downward revisions in growth forecasts. For example, in 2008, Boeing’s annual forecast for the aviation market projected a 3.2 percent annual global GDP growth rate from 2007 to 2027. However, this estimate was made before the onset of negative global economic growth in 2009 and could be revised downward in Boeing’s 2009 forecast. According to FAA’s March 2009 Aerospace Forecast, global GDP growth, which averaged 3 percent annually from 2000 to 2008, will be 0.8 percent from 2008 to 2010 before recovering to an estimated average annual growth rate of 3.4 percent from 2010 to 2020. The International Air Transport Association has predicted that global air traffic will decrease by 3 percent in 2009 with the economic downturn. Moreover, according to the association, even if air traffic growth resumes in 2010, passenger air traffic levels will be 12 percent lower in the first few years after the slowdown and 9 percent lower in 2016 than the association forecasted in late 2007. To the extent that air traffic declines, emissions also will decline. In developing its forecasts, IPCC made assumptions about factors other than economic growth that also affected its forecast results, as IPCC itself, experts we interviewed, and FAA have noted: IPCC assumed that advances in aircraft technology and the introduction of new aircraft would increase fuel efficiency by 40 percent to 50 percent from 1997 through 2050. IPCC assumed that an ideal air traffic management system would be in place worldwide by 2050, reducing congestion and delays. However, the forecast does not account for the possibility that some airlines might adopt low-carbon alternative fuels. 
IPCC assumed that airport and runway capacity would be sufficient to accommodate future air traffic levels. However, if IPCC’s assumptions about improvements in fuel efficiency and air traffic management are not realized, aircraft could produce higher emissions levels than IPCC estimated and IPCC’s estimates would be understated. Conversely, if airports and runways have less capacity than IPCC assumed, then air traffic levels could be lower and, according to IPCC and some experts, IPCC’s forecast could overstate future aviation emissions. Finally, IPCC pointed out that its estimate that aviation will contribute 5 percent of positive radiative forcing in 2050 does not include the potential impact of aviation-induced cirrus clouds, which could be substantial. Because IPCC’s forecasts are based on assumptions about future conditions and scientific understanding of the radiative forcing effects of certain aviation emissions is limited, IPCC’s forecasts are themselves uncertain. According to FAA officials, given the numerous assumptions and inherent uncertainties involved in forecasting aviation emissions levels out to the year 2050, along with the significant shocks and structural changes the aviation community has experienced over the last few years, IPCC’s projections are highly uncertain, even for the midrange scenario. If emissions from aviation and all other sectors continue to grow at about the same relative rate, aviation’s contribution as a portion of overall emissions will not change significantly. However, if significant reductions are made in overall emissions from other sources and aviation emission levels continue to grow, aviation’s contribution could grow. 
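The size of FAA’s downward revision, cited earlier, is easier to grasp when the cumulative growth figures are converted into average annual rates. The sketch below is an illustrative calculation using only the figures in the text; the per-year rates are our arithmetic, not FAA estimates.

```python
# FAA's 2005 forecast: ~60% growth in U.S. aviation CO2 and NOx emissions
# from 2001 to 2025. Its revised forecast: ~30% growth over the same period.
years = 2025 - 2001  # 24 years

original_rate = 1.60 ** (1 / years) - 1
revised_rate = 1.30 ** (1 / years) - 1

print(f"Original forecast: ~{original_rate:.2%} per year")  # ~1.98% per year
print(f"Revised forecast:  ~{revised_rate:.2%} per year")   # ~1.10% per year
# Halving the cumulative growth cuts the implied annual rate by almost half
# as well, because the growth rates involved are small.
```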
According to experts we interviewed, a number of different technological and operational improvements related to engines, aircraft design, operations, next-generation air traffic management, and fuel sources are either available now or are anticipated in the future to help reduce carbon dioxide emissions from aircraft. We interviewed and surveyed 18 experts in the fields of aviation and climate change and asked them to assess a number of improvements to reduce emissions using a variety of factors, such as potential costs and benefits, and then used the results to inform the following discussion. (Complete survey results can be found in app. III.) The development and adoption of low-emissions technologies is likely to be dependent upon fuel prices or any government policies that price aircraft emissions. Higher fuel prices or prices on emissions—for example through government policies such as an emissions tax—would make the costs of low-emissions technologies relatively cheaper and are likely to encourage their development. In addition, while fuel efficiency and emissions reductions may be important to airlines, so are a number of other factors, including safety, performance, local air quality, and noise levels, and trade-offs may exist between these factors. Improvements to aircraft engines have played a primary role in increasing fuel efficiency and reducing engine emission rates; experts we interviewed expect them to do so in the future—one study estimates that 57 percent of improvements in aircraft energy intensity between 1959 and 1995 were due to improvements in engine efficiency. Such improvements have resulted from increasing engine pressure and temperatures (which increases their efficiency and decreases fuel usage) and improving the “bypass ratio,” a measure of airflow through the engine. 
However, according to experts we surveyed, further advances in these technologies may face high development costs (see table 2), and some may not be available for commercial use any time soon because engineers still face challenges in improving engine technology. Some technologies may be available sooner than others, but all present a range of challenges and trade-offs: One latest-generation aircraft engine, the geared turbofan engine, is likely to be available for use in certain aircraft in the next few years; promises to reduce emissions, according to its manufacturer, Pratt & Whitney; and may face few challenges to widespread adoption. According to Pratt & Whitney, this engine design is estimated to reduce fuel burn and emissions by 12 percent, compared with similar engines now widely used, in part due to an increase in the engine’s bypass ratio. The geared turbofan engine is the result of research conducted by NASA and Pratt & Whitney. Another engine technology, which could be introduced in the next 5 to 15 years, is the “open rotor” engine. It may deliver even greater emissions reductions but may face consumer-related challenges. The open rotor engine holds the engine fan blades on the outside of the engine case, thereby increasing the air flow around the engine, the effective bypass ratio, and the efficiency of the engine’s propulsion. However, this engine may be noisy, and its large, visible engine blades could raise consumer concerns, according to experts we surveyed. Research in the United States is currently a joint effort of NASA and General Electric. Rolls-Royce is also pursuing this technology. In the longer term, despite some engineering challenges, distributed propulsion technologies also hold promise for reducing aircraft emissions. Distributed propulsion systems would place many small engines throughout an aircraft instead of using a few large engines, as today’s aircraft do. 
Experts we interviewed said that engineering challenges must be overcome with distributed propulsion, including determining the best and most efficient way to distribute power and store fuel. NASA is currently involved in distributed propulsion research. Aircraft improvements also have played a role in reducing emissions rates in the past, and experts we interviewed expected them to continue to do so. Through improvements in materials used to build aircraft and other improvements that increase aerodynamics and reduce drag, aircraft have become more fuel efficient over time. In the short term, improvements in aircraft materials, leading to decreased weight, and improvements in aerodynamics will help reduce fuel consumption and, thus, emissions rates. In the longer term, new aircraft designs, primarily a blended wing-body aircraft, hold potential for greater reductions in emissions rates. However, new aircraft concepts face engineering and consumer acceptance challenges, and new technologies are likely to incur high development costs (see table 3). The following improvements to aircraft should help reduce aircraft fuel consumption and emissions in the long term, despite costs and challenges: The use of lightweight composite materials in aircraft construction has led to weight and fuel burn reductions in the past and is expected to continue to do so in the future. Over time, aircraft manufacturers have increasingly replaced more traditional materials such as aluminum with lighter-weight composite materials in airframe construction. For example, according to Boeing, 50 percent of the weight of the airframe of the Boeing 787, expected to be released in 2010, will be attributable to composite materials, compared with 12 percent composites in a currently available Boeing 777. 
According to Airbus, it first began using composite materials in airframe construction in 1985, and about 25 percent of the airframe weight of an A380 manufactured in 2008 was attributable to composites. By reducing the weight of the airframe, the use of composites reduces aircraft weight, fuel burn, and emissions rates. Retrofits such as winglets—wing extensions that reduce drag—can be made to aircraft to make them more aerodynamic but may have limited potential for future emissions reductions, according to experts we surveyed. By improving airflow around wings, winglets reduce drag and improve fuel efficiency, thus reducing emissions by a modest amount. Boeing estimates that the use of winglets on a 737 reduces fuel burn by 3.5 percent to 4 percent on trips of over 1,000 nautical miles. Many new aircraft can be purchased with winglets, and existing aircraft also can be retrofitted with them. However, winglets have already become very common on U.S. commercial airline aircraft and provide limited benefit for short-haul flights. According to experts we surveyed, there is low potential for future fuel consumption and emissions reductions from winglets. Redesigned aircraft, such as a blended wing-body aircraft—that is, an aircraft in which the body and wings are part of one airframe—hold greater potential for reducing emissions, according to experts we surveyed, though these designs face challenges as well. Several public and private organizations, including NASA and Boeing, are conducting research on such aircraft. Many experts expect that blended wing-body aircraft will reduce emissions through improved aerodynamics and lighter weight. NASA, for example, estimates that a blended wing-body aircraft could reduce emissions by 33 percent compared with currently available aircraft. 
However, these new designs face challenges; notably, according to experts we interviewed, development costs are likely to be substantial, their radically different appearance may pose consumer acceptance issues, and they may require investments in modifying airports. Airlines have already taken a number of steps to improve fuel efficiency over time; however, the potential for future improvements from these measures may be limited. Airlines have increased their load factors (the percentage of seats occupied on flights), increasing the fuel efficiency of aircraft on a per-passenger basis. Load factors were about 80 percent for U.S. carriers in 2008, compared with about 65 percent in 1995. However, some experts we interviewed said the potential for additional future emissions reductions from increasing load factors may be small because they are already so high. Airlines also have removed many unnecessary items from aircraft and minimized supplies of certain necessary items, such as water, carried on board. As a result, according to some experts we interviewed, there may be little additional improvement in reducing emissions by reducing on-board weight. Airlines also have made other voluntary operational changes to reduce emissions, such as reducing speeds on certain routes, which reduces fuel use, and washing aircraft engines to make them cleaner and more efficient. Airlines also have retired less-fuel-efficient aircraft and replaced them with more-fuel-efficient models. For example, in 2008, American Airlines announced it was replacing more of its fuel-inefficient MD-80 aircraft with more efficient Boeing 737-800 aircraft. In addition, Continental Airlines, in 2008, replaced regional jets with turboprop planes on many routes. Still other improvements also are available for airlines to reduce emissions in the future, but the experts we interviewed ranked the potential for emissions reductions and consumer acceptance of these improvements as low (see table 4). 
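The load-factor gains noted above translate directly into per-passenger fuel efficiency: for a given aircraft and route, fuel burned per passenger is roughly inversely proportional to the load factor. The sketch below is a minimal illustration of that arithmetic using the rounded figures in the text; it deliberately ignores the small increase in total fuel burn that comes with heavier payloads.

```python
# Simplification: per-passenger fuel burn ~ total fuel burn / passengers,
# and total fuel burn is treated as fixed for a given aircraft and route.
load_factor_1995 = 0.65  # ~65% of seats filled (U.S. carriers, 1995)
load_factor_2008 = 0.80  # ~80% of seats filled (U.S. carriers, 2008)

# Relative fuel per passenger, 2008 vs. 1995:
relative_fuel_per_pax = load_factor_1995 / load_factor_2008  # ~0.81
improvement = 1 - relative_fuel_per_pax

print(f"Per-passenger fuel burn down ~{improvement:.0%}")  # ~19%
# Even filling every seat (load factor 1.0) would cut per-passenger fuel
# burn by only another ~20% from the 2008 level, which is why experts see
# limited remaining headroom from this measure.
```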
Airlines could make other operational changes to reduce fuel burn and emissions but are unlikely to do so, because the potential for consumer acceptance of such changes is low, according to experts we surveyed. For example, aircraft could fly in formation to improve airflow and reduce fuel burn. More specifically, rather than flying individually, several aircraft could fly in proximity to one another, reducing aircraft drag and, consequently, fuel use. However, aircraft would have to fly closer to one another than FAA’s regulations currently allow, and additional technological and aerodynamic research would be needed. Another potential option, currently used for military purposes, is air-to-air refueling. Under this option, aircraft would be fueled in flight by tanker aircraft, reducing the amount and weight of fuel needed for the flight. However, DOT staff told us that air-to-air refueling may pose safety risks similar to those posed by formation flying. Some experts also have suggested that airlines make en route on-ground fueling stops on long-haul flights, so they could reduce the amount of fuel they carry. However, more fueling stops could have negative effects on air quality at airports used for these stops as well as on air traffic operations. According to FAA, some of the air traffic management improvements that are part of NextGen—the planned air traffic management system designed to address the impacts of future traffic growth—can help reduce aircraft fuel consumption and emissions in the United States. Besides improving air traffic management, NextGen has environmental goals, which include accelerating the development of technologies that will lower emissions and noise. According to FAA, it is conducting a review to develop a set of NextGen goals, targets, and metrics for climate change, as well as for noise and local air quality emissions. 
NextGen has the potential to reduce aircraft fuel burn by 2025, according to FAA, in part through technologies and procedures that reduce congestion and create more direct routing. Some procedures and technologies of NextGen have already been implemented and have already led to emissions reductions. Similarly, in Europe through the Single European Sky Air Traffic Management Research Program (SESAR), air traffic management technologies and procedures will be upgraded and individual national airspace systems will be merged into one, helping to reduce emissions per flight by 10 percent according to EUROCONTROL, the European Organization for the Safety of Air Navigation. However, some experts we met with said that because some of SESAR’s technologies and procedures have already been implemented, future fuel savings might be lower. Table 5 provides information on selected components of NextGen that hold potential for reducing aircraft emissions. NextGen has the potential to reduce fuel consumption and emissions through technologies and operational procedures: NextGen makes use of air traffic technologies to reduce emissions. For example, the Automatic Dependent Surveillance-Broadcast (ADS-B) satellite navigation system is designed to enable more precise control of aircraft during flight, approach, and descent, allowing for more direct routing and thus reducing fuel consumption and emissions. Also, Area Navigation (RNAV) will compute an aircraft’s position and ground speed and provide meaningful information on the flight route to pilots, enabling them to save fuel through improved navigational capability. NextGen Network-Enabled Weather will provide real-time weather data across the national airspace system, helping reduce weather-related delays and allowing aircraft to best use weather conditions to improve efficiency. NextGen also relies on operational changes that have demonstrated the potential to reduce fuel consumption and emissions rates. 
Continuous Descent Arrivals (CDA) allow aircraft to remain at cruise altitudes longer as they approach destination airports, use lower power levels, and therefore produce lower emissions during landings. CDAs are already in place at a number of U.S. airports, and, according to FAA, the use of CDAs at Atlanta Hartsfield International Airport reduces carbon dioxide emissions by an average of about 1,300 pounds per flight. Required Navigation Performance (RNP) also permits an aircraft to descend on a more precise route, reducing its consumption of fuel and lowering its carbon dioxide emissions. According to FAA, over 500 RNAV and RNP procedures and routes have been implemented. Funding and other challenges, however, affect FAA’s implementation of these various NextGen procedures and technologies. The use of alternative fuels, including those derived from biological sources (biofuels), has the potential to reduce greenhouse gas emissions from aircraft in the future; however, these fuels also present a number of challenges and environmental concerns. While the production and use of biofuels result in greenhouse gas emissions, the extent to which they provide a reduction in greenhouse gas emissions depends on whether their emissions on an energy-content basis are less than those resulting from the production and use of fossil fuels. To date, some assessments of biofuels have shown a potential reduction in greenhouse gas emissions when compared with fossil fuels, such as jet fuel. However, researchers have not agreed on the best approach for determining the greenhouse gas effects of biofuels and the magnitude of any greenhouse gas reductions attributable to their production and use. FAA, EPA, and U.S. Air Force officials we met with said that quantifying the life-cycle emissions of biofuels is difficult, but work in this area is currently under way. 
For example, according to EPA, the agency has developed a comprehensive methodology to determine the life-cycle emissions, including both direct and indirect emissions, of a range of biofuels. This methodology, which involved extensive coordination with experts outside of and across the federal government, was included in the recent notice of proposed rulemaking on the renewable fuel standard. Non-oil-energy sources, such as hydrogen, have potential for providing energy for ground transport, but many experts we met with said that such sources are unlikely to have use for commercial aircraft given technological, cost, and potential safety issues. According to experts we interviewed, a variety of sources could be used to produce biofuels for aircraft, including biomasses such as switchgrass and forest and municipal waste; and oils from jatropha (a drought-resistant plant that can grow in marginal soil), algae, camelina (a member of the mustard family that can grow in semiarid regions), palm, and soy. However, many experts claim that some of these crops are unsuitable for use as biofuels because they may have negative environmental and economic consequences, such as potentially reducing the supply and quality of water, reducing air quality and biodiversity, and limiting global food supplies. For example, cultivating palm for biofuel production might lead to deforestation, thereby increasing both greenhouse gas emissions and habitat loss. In addition, jatropha has been identified as an invasive species in some regions and, because of its aggressive growth, may have the potential to reduce available habitat for native species. According to experts we met with, algae, on the other hand, are seen as a potentially viable source: they can be grown using saltwater and in a variety of other environments. 
In addition, according to DOT, camelina appears to be a potential biofuel source in the short term because it is not currently used for food and requires limited water to grow. However, many experts we interviewed raised questions about the availability of future supplies of biofuels. According to the experts, large investments in fuel production facilities will likely be needed because little industrial capacity and compatible infrastructure currently exist to create biofuels. The cost of current algae conversion technology has, for example, raised obstacles to the commercial-scale production needed to obtain significant supplies in the future. Given that future alternative fuels will have many uses, airlines will compete with other users, including road transportation, for those limited supplies. Compared with the market for ground transport, the market for fuels for commercial aviation is small, leading some experts to believe that fuel companies are more likely to focus their biofuel efforts on the ground transport market than on the commercial aviation market. Some experts we met with said that given the relatively small size of the market, limited biofuel supplies should be devoted to road transportation, since road transportation is the largest contributor of emissions from the transportation sector. A large number of industry and government participants, including airlines, fuel producers, and manufacturers, are currently conducting research and development on alternative fuels for aircraft. One effort is the Commercial Aviation Alternative Fuels Initiative, whose members include FAA, airlines, airports, and manufacturers. 
The goal of this initiative is to “promote the development of alternative fuels that offer equivalent levels of safety and compare favorably with petroleum-based jet fuel on cost and environmental bases, with the specific goal of enhancing security of energy supply.” Any developed biofuel will be subject to the same certification as petroleum-based jet fuel to help ensure its safety. In addition, other government efforts are under way, most notably the Biomass Research and Development Initiative. This initiative is a multiagency effort to coordinate and accelerate all federal biobased products and bioenergy research and development. The Department of Transportation is one of the initiative’s participants. Finally, the aviation industry has conducted a number of test flights using a mixture of biofuels and jet fuel. These test flights have demonstrated that fuel blends containing biofuels have potential for use in commercial aircraft. In February 2008, Virgin Atlantic Airlines conducted a demonstration flight of a Boeing 747 fueled by a blend of jet fuel (80 percent) and coconut- and babassu-oil-based fuels (20 percent). In December 2008, Air New Zealand conducted a test flight of a Boeing 747 fueled by a blend consisting of an equal mixture of jet fuel and jatropha oil. In January 2009, Continental Airlines conducted a test flight of a Boeing 737 using a blend of 50 percent jet fuel and 50 percent jatropha- and algae-based biofuel. In January 2009, Japan Airlines conducted a test flight of a Boeing 747 fueled by a blend including camelina oil. According to the airlines, the results of all these tests indicate that there was no change in performance when engines were fueled using the biofuel blends. For example, the pilot of the Air New Zealand test flight noted that both on-ground and in-flight tests indicated that the aircraft engines performed well while using the biofuel. 
Future fuel prices are likely to be a major factor influencing the development of low-emissions technologies for commercial aviation. According to the airline industry, fuel costs provide an incentive for airlines to reduce fuel consumption and emissions. However, according to some experts we interviewed, short-term increases in fuel prices may not provide enough of an incentive for the industry to adopt certain low-emission improvements. For example, commercial airlines would have a greater incentive to adopt fuel-saving technologies if the projected fuel savings exceeded the improvement’s additional life-cycle cost. The higher existing and projected fuel prices are, the more likely airlines would be to undertake such improvements, all else being equal. One expert said that if fuel costs were expected to consistently exceed $140 per barrel in the future, much more effort would be made to quickly develop a finished open rotor engine. The role of fuel prices in providing an incentive for the development and adoption of low-emission technologies is seen in some historical examples from NASA research. While winglets were first developed through a NASA research program in the 1970s, they were not used commercially until a few years ago, when higher fuel prices justified their cost. Additionally, although NASA currently sponsors research into open rotor engines, the agency also did so in the 1980s in response to high fuel prices. That research was discontinued before the technology could be matured, however, when fuel prices dropped dramatically in the late 1980s. In addition, the current economic recession has affected commercial airlines and may cause some airlines to cut back on purchases of newer and more fuel-efficient aircraft. For example, the U.S. airline industry lost about $3.7 billion in 2008, and while analysts are uncertain about its profitability in 2009, some analysts predict industry losses of around $4 billion to $10 billion. 
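The fuel-price trade-off described above can be illustrated with a simple discounted-cash-flow calculation. All figures in the sketch below (retrofit cost, annual fuel saved, time horizon, discount rate) are hypothetical, chosen only to show the mechanics; the report does not supply them.

```python
def break_even_fuel_price(upfront_cost, annual_gallons_saved, years, discount_rate):
    """Fuel price (per gallon) at which the present value of projected fuel
    savings equals the improvement's upfront cost."""
    # Present value of $1 received at the end of each year for `years` years.
    annuity_factor = sum(1 / (1 + discount_rate) ** t for t in range(1, years + 1))
    return upfront_cost / (annual_gallons_saved * annuity_factor)

# Hypothetical winglet-style retrofit: $1 million cost, 100,000 gallons of
# fuel saved per year over a 10-year horizon, 7 percent discount rate.
price = break_even_fuel_price(1_000_000, 100_000, 10, 0.07)
print(f"Break-even fuel price: ${price:.2f}/gal")
```

If fuel is expected to stay above the break-even price, the improvement pays for itself; below it, the airline has little financial incentive, consistent with the winglet history the report recounts.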
In addition, Boeing has reported a number of recent cancellations of orders for the fuel-efficient 787 Dreamliner. According to one expert we met with, when airlines are low on cash, they are unlikely to undertake improvements that will reduce their fuel consumption and emissions, even if the savings from fuel reductions will ultimately be greater than the cost of the improvement. This expert said, for example, that although it may make financial sense for airlines to engage in additional nonsafety-related engine maintenance to reduce fuel burn and emissions, they may not do so because they lack sufficient cash. Although some airlines may adopt technologies to reduce their future emissions, these efforts may not be enough to mitigate the expected growth in air traffic and the related increase in overall emissions through 2050. Although IPCC’s forecast, as mentioned earlier, assumes future technological improvements leading to annual improvements in fuel efficiency, it does not account for the possibility that some airlines might adopt biofuels or other potential breakthrough technologies. Nonetheless, even if airlines adopt such technologies, some experts believe that emissions will still be higher in 2050 under certain conditions than they were in 2000. One expert we met with developed a rough estimate of future emissions from aircraft assuming the adoption of many low-carbon technologies, such as blended wing-body aircraft, operational improvements, and biofuels. He used IPCC’s midrange forecast of emissions to 2050 as a baseline for future traffic and found that even assuming the introduction of these technologies, global emissions in 2050 would continue to exceed 2000 emissions levels. Had a lower baseline of emissions been used, forecasted emissions might have been lower. He acknowledged that more work needs to be done in this area. 
Another study by a German research organization modeled future emissions assuming the adoption of technological improvements, as well as biofuels, to reduce emissions. This study assumed future traffic growth averaging 4.8 percent between 2006 and 2026 and 2.6 percent between 2027 and 2050. While this study forecasted improvements in emissions relative to expected market growth, it estimated that by 2050 total emissions would still remain greater than 2000 emissions levels. Governments have a number of policy options—including market-based measures that set a price on emissions, such as a cap-and-trade program or a tax; regulatory standards; and funding for research and development—they could use to help reduce greenhouse gas emissions from commercial aviation and other sectors of the economy. The social benefits (for example, resulting from emissions reductions) and costs associated with each option vary, and the policies may affect industries and consumers differently. However, economic research indicates that market-based policies are more likely to better balance the benefits and costs of achieving reductions in greenhouse gases and other emissions (or, in other words, to be more economically efficient). In addition, research and development spending could complement market-based measures or standards to help facilitate the development and deployment of low-emissions technologies. However, given the relatively small current and forecasted percentage of global emissions generated by the aviation sector, actions taken to reduce aviation emissions alone, and not emissions from other sectors, could be costly and have little potential impact on reducing global greenhouse gas emissions. 
Economists and other experts we interviewed stated that establishing a price on greenhouse gas emissions through market-based policies, such as a cap-and-trade program or a tax on emissions from commercial aircraft and other sources, would provide these sources with an economic incentive to reduce their emissions. Generally, a cap-and-trade program or an emissions tax (for example, on carbon dioxide) can achieve emissions reductions at less cost than other policies because such policies give firms and consumers the flexibility to decide when and how to reduce their emissions. Many experts we surveyed said that establishing a price on emissions through a cap-and-trade program or a tax would help promote the development and adoption of a number of low-emissions technologies for airlines, including open rotor engines and blended wing-body aircraft. Subsidy programs, such as a payment per unit of emissions reduction, are another market-based policy that can in principle provide incentives for firms and consumers to reduce their greenhouse gas emissions. However, subsidy programs need to be financed—for example, through existing taxes or by raising taxes—and can create perverse incentives resulting in higher emissions. One market-based option for controlling emissions is a cap-and-trade program. Also known as an emissions trading program, a cap-and-trade program would limit the total amount of emissions from regulated sources. These sources would receive, from the government, allowances to emit up to a specific limit—the “cap.” The government could sell the allowances through an auction or provide them free of charge (or some combination of the two). In addition, the government would establish a market under which the regulated sources could buy and sell allowances with one another. Sources that can reduce emissions at the lowest cost could sell their allowances to other sources with higher emissions reduction costs. 
In this way, the market would establish an allowance price, which would represent the price of carbon dioxide (or other greenhouse gas) emissions. Generally, according to economists, by allowing sources to trade allowances, policy makers can achieve emissions reductions at the lowest cost. A cap-and-trade program can be designed to cap emissions at different points in the economy. For example, a cap-and-trade program could be designed to cap “upstream” sources like fuel processors, extractors, and importers. Under this approach, a cap would be set on the emissions potential that is inherent in the fossil fuel. The upstream cap would restrain the supply and increase the prices of fossil fuels and thus the price of jet fuel relative to less carbon-intensive alternatives. Alternatively, under a “downstream” program, direct emitters, such as commercial airlines, would be required to hold allowances equal to their total carbon emissions each year. (See fig. 9.) Economic research indicates that either type of program, if it included commercial airlines, would provide them with an incentive to reduce their fuel consumption in the most cost-effective way for each airline, such as by reducing weight, consolidating flights, or using more fuel-efficient aircraft. To the extent that airlines would pass along any program costs to customers through higher passenger fares and shipping rates, travelers and shippers could respond in various ways, including by traveling less frequently or using a different, cheaper transportation mode. The effectiveness of a cap-and-trade program in balancing the benefits and costs of the emission reductions could depend on factors included in its design. Generally, by establishing an upper limit on total emissions from regulated sources, a cap-and-trade program can provide greater certainty than other policies (for example, an emissions tax) that emissions will be reduced to the desired level. 
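The cost advantage of allowance trading can be seen in a stylized two-source example. The marginal abatement costs and emissions figures below are illustrative, not drawn from the report; they simply show why letting the low-cost abater sell allowances meets the same cap more cheaply.

```python
# Two regulated sources, each emitting 100 tons against a combined cap that
# requires 100 tons of total abatement. Constant marginal abatement costs
# are hypothetical: source A can cut at $10/ton, source B at $40/ton.
cost_a, cost_b = 10, 40
required_cut = 100

# Without trading, each source must cut its own 50 tons.
no_trade_cost = 50 * cost_a + 50 * cost_b

# With trading, A (the cheaper abater) makes the full cut and sells its
# surplus allowances to B; the same cap is met at A's lower cost.
trade_cost = required_cut * cost_a

print(no_trade_cost, trade_cost)  # 2500 vs. 1000
```

The cap, and hence the environmental outcome, is identical in both cases; trading only changes who abates, which is the sense in which economists call the program cost-effective.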
Regulated sources would be required to hold allowances equal to their total emissions, regardless of the cost. However, allowance prices could be volatile, depending on factors such as changes in energy prices, available technologies, and weather, making it more expensive for sources to meet the cap. To limit price volatility, a cost-containment mechanism called a “safety valve” could be incorporated into the cap-and-trade program to establish a ceiling on the price of allowances. For example, if allowance prices rose to the safety-valve price, the government could sell regulated sources as many allowances as they would like to buy at the safety-valve price. Although the safety valve could limit price spikes, the emissions cap would be exceeded if the safety valve were triggered. In addition, the baseline that is used to project future emissions and set the emissions cap can affect the extent to which a cap-and-trade program will contain or reduce emissions. The point in time on which a baseline is set also can influence the environmental benefits of a cap-and-trade program. For example, some environmental interest groups in Europe have claimed that the environmental benefits of including aviation in the EU ETS will be minimal, since the emissions cap will be based on the average of aviation emissions from 2004 through 2006, leading to minimal future emissions reductions. In addition, industry groups and other experts have raised concerns that a cap-and-trade program could be administratively burdensome to the government, which would need to determine how to allocate the allowances to sources, oversee allowance trading, and monitor and enforce compliance with the program. Generally speaking, an upstream program may have lower administrative costs than a downstream program because it would likely involve fewer emissions sources. 
Some members of the aviation industry have said they view open and global cap-and-trade programs positively, although they report that not all types of cap-and-trade programs will work for them. For instance, ICAO and other industry organizations have said they would prefer an open cap-and-trade program (in which airlines are allowed to trade allowances with other sectors and sources) to a closed one (in which airlines are allowed to trade emissions allowances only with one another) because an open program would give airlines more flexibility in meeting their emissions cap. Staff we met with at the Association of European Airlines expressed willingness for aviation to participate in a cap-and-trade program as long as it is global in scope, is an open system, is not in addition to similar taxes, and does not double-count emissions. In addition, some industry groups and government agencies we met with said that a global program would best ensure that all airlines would take part in reducing emissions. Some countries are planning to address aviation emissions through cap-and-trade programs. The European Union implemented the EU ETS in 2005, covering industries representing about 50 percent of its carbon dioxide emissions. The EU plans to include covered flights by aircraft operators flying into or out of EU airports starting in 2012. Please see appendix I for more details on the EU ETS, including a comprehensive discussion of the potential legal implications and stakeholders’ positions on this new framework. Other countries are considering cap-and-trade programs that would affect the aviation sector. In addition, the United States is currently considering and has previously considered cap-and-trade programs: H.R. 2454, the American Clean Energy and Security Act of 2009, 111th Cong. (2009), would create a cap-and-trade program for greenhouse gas emissions for entities responsible for 85 percent of emissions in the United States. 
The current language proposes to regulate producers and importers of any petroleum-based liquid fuel, including aircraft fuel, as well as other entities, and calls for an emissions cap in 2050 that would be 83 percent lower than 2005 emissions. The bill also calls for the emissions cap in 2012 to be 3 percent below 2005 levels, and in 2020 to be 20 percent below 2005 levels. In addition, the Obama Administration’s fiscal year 2010 budget calls for the implementation of a cap-and-trade program to regulate emissions in the United States. The budget calls for emissions reductions so that emissions in 2020 are 14 percent below 2005 levels and emissions in 2050 are 83 percent below 2005 levels. Additionally, in this Congress, the Cap and Dividend Act also proposes a cap-and-trade program for carbon dioxide emissions beginning in 2012, which would include jet fuel emissions. This program’s covered entities would include entities that would make the first sale in U.S. markets of oil or a derivative product used as a combustible fuel, including jet fuel. The bill would require the Secretary of the Treasury, in consultation with the EPA Administrator, to establish the program’s emission caps in accordance with the following targets: the 2012 cap would equal 2005 emissions; the 2020 cap would equal 75 percent of 2005 emissions; the 2030 cap would equal 55 percent of 2005 emissions; the 2040 cap would equal 35 percent of 2005 emissions; and the 2050 cap would equal 15 percent of 2005 emissions. A number of bills creating a cap-and-trade program also were introduced in the 110th Congress but did not pass. For example, a bill sponsored by Senators Boxer, Warner, and Lieberman would have established a cap-and-trade program that covered petroleum refiners and importers, among other entities. The costs of the regulation would have been borne by these refiners and importers, who would likely have passed on those costs to airlines through increases in the price of jet fuel. 
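The percentage targets in the two bills translate directly into emissions caps as fractions of a 2005 baseline. The sketch below applies the percentages quoted above to an arbitrary baseline of 100 units; the baseline value is illustrative, since the report does not state 2005 emissions in absolute terms.

```python
BASELINE_2005 = 100.0  # illustrative units of emissions

# Caps as a fraction of 2005 emissions, per the bill language cited above.
hr_2454 = {2012: 1 - 0.03, 2020: 1 - 0.20, 2050: 1 - 0.83}
cap_and_dividend = {2012: 1.00, 2020: 0.75, 2030: 0.55, 2040: 0.35, 2050: 0.15}

for year, frac in sorted(hr_2454.items()):
    print(f"H.R. 2454 {year} cap: {BASELINE_2005 * frac:.0f} units")
for year, frac in sorted(cap_and_dividend.items()):
    print(f"Cap and Dividend {year} cap: {BASELINE_2005 * frac:.0f} units")
```

Laid side by side, the schedules show the Cap and Dividend Act starting from a flat 2005-level cap in 2012 but tightening to a similar endpoint (15 percent versus 17 percent of 2005 emissions) by 2050.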
An emissions tax is another market-based policy that could be used to reduce emissions from commercial aviation and other emissions sources. Under a tax on carbon dioxide (or other greenhouse gas), the government would levy a fee for every ton of carbon dioxide emitted. Similar to a cap-and-trade program, a tax would provide a price signal to commercial airlines and other emission sources, creating an economic incentive for them to reduce their emissions. A carbon tax could be applied to “upstream” sources such as fuel producers, which may in turn pass along the tax in the form of higher prices to fuel purchasers, including commercial airlines. Similar to a cap-and-trade program, emissions taxes would provide regulated sources, including commercial airlines, with an incentive to reduce emissions in the most cost-effective way, which might include reducing weight, consolidating flights, or using more fuel-efficient aircraft. According to economic theory, an emissions tax should be set at a level that represents the social cost of the emissions. Nonetheless, estimates of the social costs associated with greenhouse gas emissions vary. For example, IPCC reported that the social costs of damages associated with greenhouse gas emissions average about $12 per metric ton of carbon dioxide (in 2005 dollars), with a range of $3 to $95 per ton (in 2005 dollars). Economic research indicates that an emissions tax is generally a more economically efficient policy tool to address greenhouse gas emissions than other policies, including a cap-and-trade program, because it would better balance the social benefits and costs associated with the emissions reductions. In addition, compared to a cap-and-trade program, an emissions tax would provide greater certainty as to the price of emissions. However, it would in principle provide less certainty about emissions reductions because the reductions would depend on the level of the tax and how firms and consumers respond to the tax. 
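As an arithmetic illustration of how such a tax would reach an airline's costs: burning jet fuel produces roughly 3.16 kg of CO2 per kg of fuel (a commonly cited combustion factor, not stated in the report). Applying IPCC's average social cost of $12 per metric ton to a hypothetical flight's fuel burn gives the tax owed; the 20,000 kg fuel burn below is an assumed figure for a medium-haul flight.

```python
CO2_PER_KG_FUEL = 3.16  # kg CO2 per kg of jet fuel burned (approximate)
TAX_PER_TON = 12.0      # dollars per metric ton of CO2 (IPCC average, 2005 dollars)

def emissions_tax(fuel_burn_kg, tax_per_ton=TAX_PER_TON):
    """Tax owed for one flight, given the fuel burned in kilograms."""
    tons_co2 = fuel_burn_kg * CO2_PER_KG_FUEL / 1000.0
    return tons_co2 * tax_per_ton

# Hypothetical medium-haul flight burning 20,000 kg of fuel:
print(f"${emissions_tax(20_000):,.2f}")  # 63.2 t CO2 -> $758.40
```

The same function, evaluated at IPCC's $3 and $95 bounds, shows how widely the per-flight burden varies with the assumed social cost of carbon.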
Subsidies are another market-based instrument that could, in principle, provide incentives for sources to reduce their emissions. For example, experts we met with said that the government could use subsidies to encourage industry and others to adopt existing low-emissions technologies and improvements, such as winglets. In addition, some experts told us that NextGen-related technologies are candidates for subsidies because of the high costs of the technologies and the benefits that they will provide to the national airspace system. According to IPCC, subsidies can encourage the diffusion of new low-emissions technologies and can effectively reduce emissions. For example, as newer, more fuel-efficient engines are developed and become commercially available, subsidies or tax credits could lower their relative costs and encourage airlines to purchase them. Although subsidies are similar to taxes, economic research indicates that some subsidy programs can be economically inefficient and need to be financed (for example, using current tax revenue or by raising taxes). For example, although some subsidy programs could lead to emissions reductions from individual sources, they may also result in an overall increase in emissions by encouraging some firms to remain in business longer than they would have under other policies, such as an emissions tax. Both a cap-and-trade program and an emissions tax would impose costs on the aviation sector and other users of carbon-based fuels. The extent to which the costs associated with an emissions control program are incurred by commercial airlines and passed on will depend on a number of economic factors, such as the level of market competition and the responsiveness of passengers to changes in price. Officials of some industry organizations we met with said that because airlines are in a competitive industry with a high elasticity of demand, they are constrained in passing on their costs, and the costs to industry likely will be large. 
The Association of European Airlines reported that airlines will have very limited ability to pass on the costs of the EU ETS. Furthermore, the International Air Transport Association has estimated that the costs to the industry of complying with the EU ETS will be €3.5 billion in 2012, with annual costs subsequently increasing. Others we interviewed, however, stated that airlines will be able to pass on costs, and the increases in ticket prices will not be large. For example, the EU estimates that airlines will be able to pass on most of the costs of their compliance with the EU ETS, which will result in an average ticket price increase of €9 on a medium-haul flight. In addition, the revenue generated by an emissions tax or by auctioning allowances under a cap-and-trade program could be used to lessen the overall impact of the program on the economy, or its impact on certain groups (for example, low-income consumers) or sectors of the economy, by, for example, reducing other taxes. Finally, according to some airline industry representatives, a program to control greenhouse gas emissions would add to the financial burden the aviation industry and its consumers already face with respect to other taxes and fees. For example, passenger tickets in the United States are subject to a federal passenger ticket tax of 7.5 percent, a segment charge of $3.40 per flight segment, and fees for security and airport facilities (up to $4.50 per airport). In addition, international flights are subject to departure taxes and customs-related fees. However, none of these taxes and fees attempt to account for the cost of greenhouse gas emissions, as a tax or cap-and-trade program would do. 
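The existing taxes and fees cited above can be totaled for a sample itinerary. The $300 base fare and two-segment routing below are hypothetical, and the security fee is omitted because the report does not state its amount; the sketch only shows how the cited rates combine.

```python
def us_ticket_taxes(base_fare, segments, airports_with_pfc):
    """Federal ticket tax plus per-segment and airport facility charges,
    using the rates cited above (7.5%, $3.40/segment, up to $4.50/airport).
    The security fee is excluded (amount not given in the report)."""
    ticket_tax = base_fare * 0.075
    segment_fees = segments * 3.40
    facility_fees = airports_with_pfc * 4.50  # assumes the maximum charge
    return ticket_tax + segment_fees + facility_fees

# Hypothetical $300 fare, two segments, facility charge collected at two airports:
print(f"${us_ticket_taxes(300, 2, 2):.2f}")  # $22.50 + $6.80 + $9.00 = $38.30
```

A per-flight emissions charge would simply be an additional line item in this sum, but one scaled to the flight's emissions rather than to the fare or routing.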
Mandating the use of certain technologies or placing emissions limits on aircraft and aircraft engines are also potential options for governments to address aircraft emissions. Standards include both technology standards, which mandate a specific control technology such as a particular fuel-efficient engine, and performance standards, which may require polluters to meet an emissions limit using any available method. The flexibility of performance standards reduces the cost of compliance compared with technology-based standards and, according to DOT, avoids potential aviation safety implications that may occur from forcing a specific technology across a wide range of operations and conditions. In either form, a standard would place a limit on the emissions levels permitted from an engine or aircraft. Regulations on specific emissions have been used to achieve specific environmental goals. ICAO’s nitrogen oxide standards place limits on nitrogen oxide emissions from newly certified aircraft engines. These standards were first adopted in 1981 and became effective in 1986. Although no government has yet promulgated standards on aircraft carbon dioxide emissions or fuel economy, emissions standards are being discussed within ICAO’s Committee on Aviation Environmental Protection and, in December 2007, a number of environmental interest groups filed petitions with EPA asking the agency to promulgate regulations for greenhouse gas emissions from aircraft and aircraft engines. In addition, the American Clean Energy and Security Act of 2009 would require EPA to issue standards for greenhouse gas emissions from new aircraft and new engines used in aircraft by December 31, 2012. 
Although standards can be used to limit greenhouse gas emissions levels from aircraft, economic research indicates that they generally are not as economically efficient as market-based instruments because they do not effectively balance the benefits and costs associated with the emissions reductions. For example, unlike market-based instruments, technology standards would give engine manufacturers little choice about how to reduce emissions and may not encourage them to find cost-effective ways of controlling emissions. In addition, according to IPCC, because technology standards may require emissions to be reduced in specified ways, they may not provide the flexibility to encourage industry to search for other options for reducing emissions. However, according to EPA, performance standards to address certain emissions from airlines, such as those adopted by ICAO and EPA, gave manufacturers flexibility in deciding which technologies to use to reduce emissions. Nonetheless, although performance standards can provide greater flexibility and therefore be more cost-effective than technology standards, economic research indicates that standards generally provide sources with fewer incentives to reduce emissions beyond what is required for compliance, compared to market-based approaches. Moreover, standards typically apply to new, rather than existing, engines or aircraft, making new engines or aircraft more expensive; as a result, the higher costs may delay purchases of more fuel-efficient aircraft and engines. Setting aviation standards also may require international cooperation. Because ICAO sets standards for all international aviation issues, it may be difficult for the U.S. government, or any national government, to set a standard that is not adopted by ICAO, although member states are allowed to do so. 
Industry groups we met with said that any standards should be set through ICAO and then adopted by the United States and other nations and, as mentioned earlier, some environmental groups have petitioned EPA to set such standards. Government-sponsored research into low-fuel-consumption and low-emissions technologies can help foster the development of such technologies, particularly in combination with a tax or a cap-and-trade program. Experts we surveyed said that increased government research and development could be used to encourage a number of low-emissions technologies, including open rotor engines and blended wing-body aircraft. According to the Final Report of the Commission on the Future of the United States Aerospace Industry, issued in 2002, the lack of long-term investments in aerospace research is inhibiting innovation in the industry and economic growth. This study also asserted that national research and development on aircraft emissions is small when compared with the magnitude of the problem and the potential payoffs that research drives. Experts we met with said that government sponsorship is crucial, especially for long-term fundamental research, because private companies may not have a sufficiently long-term perspective to engage in research that will result in products multiple decades into the future. According to one expert we interviewed, the return on investment is too far off in the future to make such research worthwhile for private companies. NASA officials said that private industry generally focuses only on what NASA deems the “next generation conventional tube and wing technologies,” which are usually projected no more than 20 years into the future. Furthermore, raising fuel prices or placing a price on emissions through a tax or cap-and-trade program is likely to encourage greater research by both the public and private sectors into low-emissions technologies because it increases the payoff associated with developing such technologies. 
Various U.S. federal agencies, including NASA and FAA, have long been involved in research involving low-emissions technologies. For example, NASA’s subsonic fixed-wing research program is devoted to the development of technologies that increase aircraft performance, as well as reduce both noise levels and fuel burn. Through this program, NASA is researching a number of different technologies to achieve those goals, including propulsion, lightweight materials, and drag reduction. The subsonic fixed-wing program aims to develop three generations of aircraft with increasing degrees of technology development and fuel burn improvement—the next-generation conventional tube and wing aircraft, the unconventional hybrid wing-body aircraft, and advanced aircraft concepts. NASA follows goals set by the National Plan for Aeronautics Research and Development and Related Infrastructure for fuel efficiency improvements for each of these generations (see table 6). However, budget issues may affect NASA’s research schedule. As we have reported, NASA’s budget for aeronautics research was cut by about half in the decade leading up to fiscal year 2007, when the budget was $717 million. Furthermore, NASA’s proposed fiscal year 2010 budget calls for significant cuts in aeronautics research, with a budget of $569 million. As NASA’s aeronautics budget has declined, the agency has focused more on fundamental research and less on demonstration work. However, as we have reported, NASA and other officials and experts agree that federal research and development efforts are an effective means of achieving emissions reductions in the longer term. According to NASA officials, the research budget for NASA’s subsonic fixed-wing research program, much of which is devoted to technologies to reduce emissions and improve fuel efficiency, will be about $69 million in 2009. FAA has proposed creating a new research consortium to focus on emissions and other issues. 
Specifically, FAA has proposed the Consortium for Lower Energy, Emissions, and Noise, which would fund, on a 50-50 cost-share basis with private partners, research and advanced development of low-emissions and low-noise technologies, including alternative fuels, over 5 years. FAA intends for the consortium to mature technologies to levels that facilitate their uptake by the aviation industry. The consortium would contribute to the goal set by the National Plan for Aeronautics Research and Development and Related Infrastructure to reduce fuel burn by 33 percent compared with current technologies. The House FAA Reauthorization Bill (H.R. 915, 111th Cong. (2009)) would provide up to $108 million in funding for the consortium for fiscal years 2010 through 2012. Lastly, the EU has two major efforts dedicated to reducing aviation emissions. The Advisory Council for Aeronautics Research in Europe (ACARE) is a collaborative group of governments and manufacturers committed to conducting strategic aeronautics research in Europe. According to officials with the European Commission Directorate General of Research, about €150 million to €200 million per year is devoted to basic research through ACARE. Another research effort in Europe is the Clean Sky Joint Technology Initiative, which will provide €1.6 billion over 7 years to fund various demonstration technologies. We provided a draft copy of this report to the Department of Defense, the Department of State, the Department of Transportation, the National Aeronautics and Space Administration, and the Environmental Protection Agency for their review. The Department of Defense had no comments. The Department of State provided comments via email; these comments were technical in nature, and we incorporated them as appropriate. The Department of Transportation provided comments via email. Most of these comments were technical in nature, and we incorporated them as appropriate. 
In addition, DOT stated that our statements indicating that future technological and operational improvements may not be enough to offset expected emissions growth are not accurate given the potential adoption of alternative fuels. We agree that alternative fuels have the potential to reduce aircraft emissions in the future; to the extent that a low-emission (on a life-cycle basis) alternative fuel is available in substantial quantities for the aviation industry, emissions from the aviation industry are likely to be less than they otherwise would be. However, we maintain that given concerns over the potential environmental impacts of alternative fuels, including their life-cycle emissions, as well as the extent to which such fuels are available in adequate supplies at a competitive price, there may be a somewhat limited potential for alternative fuel use to reduce emissions from commercial aircraft in the future, especially in the short term. DOT also suggested that we clarify the sources for our discussion about policy options that can be used to address aviation emissions. As much of that discussion is based on economic research and experience with market-based instruments and other policies, we clarified our sources where appropriate. NASA provided a written response (see app. V) in which it stated that our draft provided an accurate and balanced view of issues relating to aviation and climate change. NASA also provided technical comments that were incorporated as appropriate. EPA provided technical comments via email that were incorporated as appropriate and also provided a written response (see app. VI). EPA was concerned that characterizing aircraft emissions standards as economically inefficient, especially compared with market-based measures, might lead readers to believe that emissions standards cannot be designed in a manner that fosters technological innovation and economic efficiency. 
EPA officials explained that, based on their experience, standards can be designed to optimize technical responses and provide regulated entities with flexibility for compliance, and that studies show that EPA regulations have generated benefits in excess of costs. We agree that allowing regulated sources more flexibility in how they meet emissions standards can reduce the costs associated with achieving the emissions reductions. However, economic research indicates that for addressing greenhouse gas emissions, market-based measures such as emissions taxes or cap-and-trade programs would be economically efficient (that is, would maximize net benefits) compared with other approaches, in part because market-based measures can give firms and consumers more flexibility to decide when and how to reduce their emissions. Emissions standards, for example, generally give regulated sources fewer incentives to reduce emissions beyond what is required for compliance. The ultimate choice of what specific policy option or combination of options governments might use, and how it should be designed, is a complex decision and beyond the scope of our discussion. Finally, EPA was concerned that our draft report did not adequately discuss the increases in fuel consumption and emissions that have resulted from high rates of market growth and expected continued growth. We believe that our report adequately discusses fuel efficiency as well as fuel consumption and emissions output. In addition, our report discusses that aviation emissions are expected to grow in the long term, despite the potential availability of a number of technological and operational options that can help increase fuel efficiency. In response to this comment, we added information on forecasted fuel use by U.S.-based commercial airlines. 
We are sending copies of this report to the Secretaries of Defense, State, and Transportation and the Administrators of the Environmental Protection Agency and the National Aeronautics and Space Administration. This report is also available at no charge on the GAO Web site at http://www.gao.gov. The European Union’s recent decision to include aviation in its Emissions Trading Scheme (EU ETS), which covers U.S. carriers flying in and out of Europe, is a complex and controversial matter. Preparations by U.S. carriers are already underway for 2012, the first year aircraft operators will be included in the ETS. The inclusion of aviation in the current EU ETS implicates a number of international treaties and agreements and has raised concerns among stakeholders both within and outside the United States. Many stakeholders within the United States have posited that the inclusion of aviation in the ETS violates provisions of these international agreements and is contrary to international resolutions. Others, primarily in Europe, disagree and find aviation’s inclusion in the current ETS to be well within the authority set forth in these agreements. In light of these disagreements, the EU may confront a number of hurdles in attempting to include U.S. carriers in the current EU ETS framework. In 2005, the EU implemented its ETS, a cap-and-trade program to control carbon dioxide emissions from various energy and industrial sectors. On December 20, 2006, the European Commission set forth a legislative proposal to amend the law, or directive, that established the ETS so as to include aviation. On July 8, 2008, the European Parliament adopted the legislative resolution of the European Council, and on October 24, 2008, the Council adopted the directive, signaling its final approval. The directive was published in the Official Journal on January 13, 2009, and became effective on February 2, 2009. 
Under the amended ETS Directive, beginning on January 1, 2012, a cap will be placed on total carbon dioxide emissions from all covered flights by aircraft operators flying into or out of an EU airport. Emissions will be calculated for the entire flight. For 2012, the cap for all carbon dioxide emissions from covered flights will be set at 97 percent of historical aviation emissions. For the 2013-2020 trading period and subsequent trading periods, the cap will be set to reflect annual emissions equal to 95 percent of historical aviation emissions. The cap represents the total quantity of emissions allowances available for distribution to aircraft operators. In 2012 and each subsequent trading period, 15 percent of allowances must be auctioned to aircraft operators; the remaining allowances will be distributed to these aircraft operators for free based on a benchmarking process. Individual member states, in accordance with the EU regulation, will conduct the auctions for aircraft operators assigned to that member state. The auction of allowances will be open for anyone to participate. The number of allowances each member state has to auction depends on its proportionate share of the total verified aviation emissions for all member states for a certain year. The member states will be able to use the revenues raised from auctions in accordance with the amended directive. For each trading period, aircraft operators can apply to their assigned member state to receive free allowances. Member states will allocate the free allowances in accordance with a process the European Commission establishes for each trading period. After the conclusion of each calendar year, aircraft operators must surrender to their assigned member state a number of allowances equal to their total emissions in that year. 
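The cap and allocation arithmetic described above is simple enough to sketch as a calculation. The following is an illustrative sketch only; the function name and the historical-emissions baseline used in the example are hypothetical, not actual EU figures.

```python
def aviation_allowances(historical_emissions_tonnes, year):
    """Sketch of the EU ETS aviation cap arithmetic (illustrative only).

    The cap is 97 percent of historical aviation emissions in 2012 and
    95 percent for the 2013-2020 trading period. Of the capped total,
    15 percent of allowances are auctioned; the remainder is distributed
    free to aircraft operators through a benchmarking process.
    """
    cap_share = 0.97 if year == 2012 else 0.95
    cap = historical_emissions_tonnes * cap_share  # total allowances available
    auctioned = cap * 0.15                         # share sold at auction
    free = cap - auctioned                         # distributed free via benchmarks
    return cap, auctioned, free

# Hypothetical historical baseline of 200 million tonnes of CO2:
cap, auctioned, free = aviation_allowances(200_000_000, 2012)
```

Under this sketch, a hypothetical 200-million-tonne baseline would yield a 2012 cap of 194 million tonnes, of which roughly 29.1 million would be auctioned and the rest allocated free.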
If an aircraft operator’s emissions exceed the number of free allowances it receives, it will be required to purchase additional allowances at auction or on the trading market for EU ETS allowances. In addition, in 2012, aircraft operators will be able to submit certified emissions reductions (CER) and emission reduction units (ERU)—from projects in other countries undertaken pursuant to the Kyoto Protocol’s Clean Development Mechanism and Joint Implementation—to cover up to 15 percent of their emissions in lieu of ETS allowances. For subsequent trading periods, aircraft operators’ use of CERs and ERUs depends in part on whether a new international agreement on climate change is adopted. However, regardless of whether such an agreement is reached, in the 2013 through 2020 trading period, each aircraft operator will be allowed to use CERs and ERUs to cover at least 1.5 percent of their emissions. If a country not participating in the EU ETS adopts measures for reducing the climate change impact of flights to participating countries, then the European Commission, in consultation with that country, will consider options to provide for “optimal interaction” between the ETS and that country’s regulatory scheme—for example, the Commission may consider excluding from the ETS flights to participating EU ETS countries from that country. Although 2012 is the first year aircraft operators must comply with the ETS law, preparations in the EU and from U.S. carriers began soon after the law went into force. 
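The year-end surrender step for a single operator can be sketched the same way. This is a simplified, illustrative calculation under the 2012 rule allowing CERs and ERUs to cover up to 15 percent of an operator's emissions; the function name and all figures are hypothetical, and the sketch applies offsets only to the shortfall beyond the free allocation.

```python
def compliance_position(verified_emissions, free_allowances, cer_limit_share=0.15):
    """Illustrative year-end compliance arithmetic for one aircraft operator.

    The operator surrenders allowances equal to its verified emissions.
    In this simplified sketch, any shortfall beyond its free allocation is
    covered first by CERs/ERUs up to the period's limit (15 percent of
    emissions in 2012), with the remainder purchased at auction or on the
    secondary market.
    """
    max_cer_use = verified_emissions * cer_limit_share
    shortfall = max(0.0, verified_emissions - free_allowances)
    cers_used = min(shortfall, max_cer_use)  # offsets applied first
    to_purchase = shortfall - cers_used      # allowances still to be bought
    return cers_used, to_purchase

# Hypothetical operator: 1.2 Mt of verified emissions, 1.0 Mt of free allowances
cers, buy = compliance_position(1_200_000, 1_000_000)
```

In this hypothetical case the operator is 200,000 tonnes short, could cover 180,000 tonnes with offsets, and would need to purchase allowances for the remaining 20,000 tonnes.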
The inclusion of aviation in the newly amended EU ETS implicates a number of international agreements, policy statements, and a bilateral agreement specific to the United States, including the United Nations Framework Convention on Climate Change (UNFCCC), the Kyoto Protocol to the UNFCCC, the Convention on International Civil Aviation (the ‘Chicago Convention’), Resolutions of the International Civil Aviation Organization, and the U.S.-EU Air Transport Agreement (the ‘U.S.-EU Open Skies Agreement’). The UNFCCC, a multilateral treaty on global warming that was signed in 1992 and has been ratified by 192 countries, including the United States, seeks to “achieve stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.” Although the UNFCCC required signatory states to formulate a national response to climate change, its mitigation provisions did not require mandatory national emissions targets. In order to strengthen the commitments articulated in the UNFCCC, the Kyoto Protocol was developed within the UNFCCC’s framework and adopted in 1997. The Protocol entered into force in February 2005. The Kyoto Protocol established binding greenhouse gas emissions targets for a number of industrialized nations and the European Economic Community (EEC). Notably, the agreement required these industrialized nations and the EEC to pursue “limitations or reduction of emissions of greenhouse gases … from aviation … working through the International Civil Aviation Organization.” As of January 2009, 183 countries had ratified the Kyoto Protocol; the United States had not. Further, the Convention on International Civil Aviation, commonly known as the Chicago Convention, signed on December 7, 1944, sets forth rules on airspace, issues of sovereignty, aircraft licensing and registration, and general international standards and procedures, among others. 
Notably, the treaty sets forth sovereignty provisions, recognizing that a contracting state has exclusive sovereignty over the airspace above its own territory. Provisions potentially applicable to the recent amendment incorporating aviation into the ETS include Articles 11, 12, 15, and 24. Established by the Chicago Convention in 1944, the International Civil Aviation Organization (ICAO) is an agency of the United Nations tasked with fostering the planning and development of international aviation. ICAO has issued a number of Assembly Resolutions, which are statements of policy rather than law, including the nonbinding ICAO Resolution A36-22 relating to environmental protection and aviation emissions. This resolution, which supersedes ICAO Resolution A35-5, which had endorsed the further development of an open emissions trading scheme for international aviation, calls for mutual agreement between contracting states before implementation of an emissions trading scheme. Additionally, the Resolution formed a new Group on International Aviation and Climate Change (GIACC) that was tasked with developing and recommending to the ICAO Council a program of action to address international aviation and climate change. GIACC is due to report to the Council later this year. Finally, the U.S.-EU Air Transport Agreement, signed on April 25 and 30, 2007, and provisionally applied as of March 30, 2008, provided greater flexibility for flights between the United States and the EU, authorizing every U.S. and every EU airline to operate without restriction on the number of flights, aircraft, and routes; set fares according to market demand; and enter into cooperative arrangements, including codesharing, franchising, and leasing. 
It includes enhanced opportunities for EU investment in carriers from almost 30 non-EU countries, and enhanced regulatory cooperation in regard to competition law, government subsidies, the environment, consumer protection, and security. Among the provisions potentially applicable to the newly amended EU ETS are Article 12, relating to charges for use of airports and related facilities and services, and Article 3, which prohibits a party from unilaterally limiting service or aircraft type. Although a number of international agreements, policy statements, and bilateral agreements are currently in place, climate change policies are constantly evolving. In December 2009, the Conference of the Parties to the UNFCCC will meet in Copenhagen to discuss and negotiate an “agreed outcome” in order to implement the UNFCCC “up to and beyond 2012.” A number of stakeholders have expressed concern as to the legal basis for aviation’s inclusion in the EU ETS. In the United States, within the EU community, and in countries throughout the world, public and private entities, as well as legal scholars, have expressed opinions as to whether the inclusion of aviation in the ETS complies with international law. Stakeholders within the United States, such as the executive branch, members of Congress, and the Air Transport Association (ATA), have weighed in on the legality of the newly amended EU ETS, which requires compliance by U.S. carriers. In 2007 and 2008, the executive branch expressed the view that the imposition of the ETS was inconsistent with international law, specifically, the Chicago Convention and the U.S.-EU Air Transport Agreement. While the executive branch has not articulated a position on this issue since mid-2008, it has expressed the importance of climate change and of developing a solution on a global level. ATA, a trade association representing principal U.S. 
airlines, also has concluded that the EU ETS’s inclusion of aviation violates international law, specifically the Chicago Convention. ATA argues that imposition of the ETS on U.S.-based carriers is contrary to Articles 1, 12, 11, and 15 and potentially, in the alternative, Article 24. In summary, ATA argues that the ETS, as amended, violates the Article 1 and Article 12 provisions of sovereignty and authority. Article 1, which provides contracting states exclusive sovereignty over their airspace, is violated by the EU’s extraterritorial reach, which covers emissions of non-EU airlines in other states’ airspace. Further, Article 12, which requires each contracting state to ensure that aircraft under its jurisdiction are in compliance with rules and regulations relating to the flight and maneuver of aircraft, also is violated. ATA argues that Article 12 gives ICAO primary authority, under the Convention, to set rules for the “flight and maneuver of aircraft” over the “high seas,” which precludes the application of rules by one state over the airlines of another state to the extent inconsistent with ICAO rules. Thus, because ICAO has stated that one state can apply emissions trading to the airlines of another state only through mutual consent, ATA contends that the EU’s emissions trading coverage of the airlines of non-agreeing states over the high seas is inconsistent with ICAO’s authority. Additionally, with respect to Article 11, ATA argues that although Article 11 provides authority to states to establish certain rules for admission and departure of aircraft, the authority is limited. States may only establish admission and departure rules consistent with the remainder of the Chicago Convention, which prevents the EU from arguing that Article 11 authorizes EU action. In any event, ATA contends that any rules may only apply “upon entering or departing from or while within the territory of that State,” whereas the European scheme reaches outside European territory. 
Further, ATA finds that the ETS is contrary to Article 15 of the Chicago Convention because it imposes a de facto charge for the right to enter or exit an EU member state. In the alternative, ATA argues that there could be a violation of Article 24 of the Convention, which exempts fuel on board an aircraft from duties, fees, and charges. Because the law calculates emissions based on fuel consumption, the purchase of greenhouse gas permits may constitute a “similar … charge” on fuel on board, according to ATA. Additionally, Article 24 is mirrored by Article 11 of the U.S.-EU Air Transport Agreement, which extends the freedom from taxation and charges to fuel purchased in the EU. Thus, ATA argues, the prohibition against the EU levying a fuel tax applies to fuel already on board as well as fuel purchased in the EU. ATA has publicly expressed harsh opposition to the ETS’s inclusion of aviation and has stated that there will be a number of legal challenges from around the globe, including from the United States. ATA has additionally expressed discontent with the newly amended ETS law as a matter of policy, arguing that it siphons money out of aviation that could otherwise be reinvested in improving technologies that reduce emissions. Finally, Congress is considering the House FAA Reauthorization Bill, H.R. 915, 111th Cong. (2009), which includes an expression of the Sense of the Congress with respect to the newly amended EU ETS. The bill states that the EU’s imposition of the ETS, without working through ICAO, is inconsistent with the Chicago Convention, other relevant air service agreements, and “antithetical to building cooperation to address effectively the problem of greenhouse gas emissions by aircraft engaged in international civil aviation.” The bill recommends working through ICAO to address these issues. Stakeholders in the EU community and a not-for-profit business organization have expressed both legal and policy views on the newly amended ETS, as well. 
An independent contractor for the European Commission’s Directorate-General of the Environment (DG Environment) as well as the International Emissions Trading Association (IETA) have both issued opinions in support of aviation’s inclusion in the ETS. IETA supports the inclusion of aviation in the EU ETS from a policy perspective, but has not opined on the legality of its inclusion. From a policy standpoint, IETA supports aviation’s inclusion for both EU and non-EU carriers so as to share the burden of combating climate change. However, the organization has expressed concerns over a number of issues, including access to project credits, the amount of allowances available for auctioning, and the allocation calculation. The contractor’s opinion concludes that Article 15 is inapplicable and that Article 24 of the Convention does not apply to the Emissions Trading System because trading allowances are “fundamentally different from customs duties.” Additionally, the opinion finds policy support for these legal findings in ICAO Resolution A35-5 and bilateral air transport agreements. Countries outside the European Community have also joined the United States in expressing concerns regarding the imposition of the ETS on non-EU carriers. In an April 2007 letter to the German Ambassador to the European Union, the United States, Australia, China, Japan, South Korea, and Canada conveyed a “deep concern and strong dissatisfaction” with the then-proposal to include international civil aviation within the scope of the EU ETS. The letter asks that the EU ETS not include non-EU aircraft unless done by mutual consent. Although supportive of the reduction of greenhouse gas emissions, the subscribing parties argue that the “unilateral” imposition of the ETS on non-EU carriers would potentially violate the Chicago Convention and bilateral aviation agreements with the parties to the letter. 
Moreover, they write, the proposal runs counter to the international consensus that ICAO should handle matters of international aviation, which was articulated by the ICAO Assembly and the ICAO Council in 2004 and 2006, respectively. The letter closes with a reservation of the right to take appropriate measures under international law if the ETS is imposed. Given the controversial nature and complexity of aviation’s inclusion in the EU ETS, a number of scholars in the legal community, both within the United States and the EU, have provided explanatory articles or position papers on the issue of the consistency of the EU’s plans with its international legal obligations. One U.S. law review article by Daniel B. Reagan argues that international aviation emission reductions should be pursued through ICAO given the “political, technical, and legal implications raised by the regulation.” This article sets forth that politically, ICAO is the appropriate body because it can work towards uniformity in a complex regulatory arena, incidentally resulting in increased participation from a variety of stakeholders, reduction of resentment, and a reduced likelihood of non-compliance and legal challenges. Further, ICAO has the expertise necessary to technically design aviation’s emission reduction regime and is in a position to consider the “economic, political, and technical circumstances of its member states … .” Finally, Reagan argues that pursuing an emissions reduction regime through ICAO could avoid likely legal challenges which present themselves under the current ETS, as ICAO could facilitate a common understanding of contentious provisions. In conclusion, he proposes that the EU should channel the energy for implementation of the current regime into holding ICAO accountable for fulfilling environmental duties. 
In contrast, a law review article published in the European Environmental Law Review in 2007 by Gisbert Schwarze argues that bringing aviation into the EU ETS falls clearly within existing law and is, in fact, mandated. The article presents the case that neither existing traffic rights in member states, bilateral air transport agreements, nor the Chicago Convention pose any legal obstacles. He argues, in fact, that the EU has a mandate under the UNFCCC and the Kyoto Protocol to implement climate change policies which include aviation. First, the article sets forth that the inclusion of aviation does not restrict existing traffic rights or allow or disallow certain aircraft operations in different member states, but rather merely brings the amount of emissions into the decision-making process. Further, Schwarze explains that imposing the ETS on carriers flying in and out of the EU is well within the Chicago Convention. Article 1 of the Convention provides contracting states exclusive sovereignty over their airspace, which provides the EU with the authority to impose obligations relating to arrivals and departures, so long as there is no discrimination on the basis of nationality, as required by Article 11. Additionally, the article sets forth that Article 12, regarding the flight and maneuver of aircraft, is not applicable because, as argued above, the ETS does not regulate certain aircraft operations. Article 15, which covers charges, is similarly inapplicable because emissions allowances on the free market or through the auctioning process do not constitute a charge. Finally, Article 24 is inapposite as well because the emissions trading system does not constitute a customs duty. Additionally, Schwarze argues that the bilateral air transport agreements with various nations, such as the Open Skies Agreement with the United States, do not pose any legal barriers to the inclusion of aviation in the ETS. 
These agreements contain a prohibition of discrimination similar to Article 11 of the Chicago Convention and a fair competition clause, which requires fair competition among signatories in international aviation and prohibits a party from unilaterally limiting traffic. The article argues that so long as the ETS operates without discrimination, it is in conformity with the principle of sound and economic operation of air services and therefore satisfies the fairness clause. Finally, since the ETS provides only an incentive to reduce emissions, it does not regulate the amount of air traffic. Schwarze further argues that not only is the inclusion of aviation in the EU ETS legally sound, but the UNFCCC and the Kyoto Protocol mandate its inclusion. The UNFCCC requires all parties to the treaty to adopt national policies and take corresponding measures on the mitigation of climate change consistent with the objective of the convention, recognizing that this can be done “jointly with other parties.” Additionally, the Kyoto Protocol, which sought to strengthen the UNFCCC, required Annex 1 parties to pursue “limitations or reduction of emissions of greenhouse gases … from aviation … working through the International Civil Aviation Organization.” And finally, although not legally binding, ICAO Resolution A35-5 endorses the development of an open emissions trading system for international aviation. As noted above, several countries have reserved the right to take appropriate actions under international law if the ETS is imposed. If challenges are brought, they could potentially be brought under the Chicago Convention, under air service agreements (e.g., the U.S.-EU Air Transport Agreement), or potentially in individual member state courts. Each of these has its own dispute resolution procedure. If a challenge is brought under the Chicago Convention after failed negotiations, Article 84 of the Convention (Settlement of Disputes) is invoked. 
Article 84 provides that if a disagreement between two or more contracting states cannot be settled by negotiation, it will be decided by the Council. A decision by the Council can be appealed to an agreed-upon ad hoc tribunal or to the Permanent Court of International Justice (now the International Court of Justice), whose decision will be binding. Air service agreements also have dispute resolution procedures, and the U.S.-EU Air Transport Agreement is no exception. Article 19 of the U.S.-EU Air Transport Agreement provides that parties to a dispute may submit to binding arbitration through an ad hoc tribunal if negotiations fail. If there is noncompliance with the tribunal’s decision and a subsequent agreement between the parties is not reached within 40 days, the other party may suspend the application of comparable benefits that arise under the agreement. The survey tool used to assess options for reducing commercial aircraft emissions is below, complete with detailed results. We do not include the responses for open-ended questions. Instructions for Completing This Tool: You can answer most of the questions by checking boxes or filling in blanks. A few questions request short narrative answers. Please note that these blanks will expand to fit your answer. Please use your mouse to navigate, clicking on the field or checking the box you wish to fill in. Do not use the “Tab” or “Enter” keys, because doing so may cause formatting problems for the document. To select a box, click on it once; to deselect a box, double click on it. If you prefer, you may print this tool, complete it by hand, and return it by fax. We ask that you complete this tool by January 9, 2009. Please save the completed document to your desktop or hard drive and e-mail it as an attachment to RosenbergMC@gao.gov. If you complete this tool by hand, please fax the completed tool to Matthew Rosenberg at GAO at 312-220-7726. 
If you have any questions, please contact Matthew Rosenberg, Senior Analyst, at 312-220-7645 or RosenbergMC@gao.gov, or Cathy Colwell, Assistant Director. 1. How would you rate your overall knowledge of technological options to reduce aircraft carbon dioxide (CO2) emissions, such as aircraft engine and aircraft design technologies, and the costs of those technologies? None (0) skip to question 9; Minimal (6) skip to question 9; Basic (4) continue to question 2; Proficient (1) continue to question 2; Advanced (7) continue to question 2. The technological options rated in questions 2 through 8 included open rotor engines, geared turbofan engines, and composite materials. 2. In your expert opinion, what is the potential for future fuel savings and CO2 emissions reductions for the following options? 3. In your expert opinion, what would be the potential R&D costs to develop the following options for commercial use? 4. Given your answer to question two, what would be the potential costs to the air transport industry to procure, operate, and maintain the following options to achieve those fuel savings and CO2 emissions reductions? (Low costs; Medium costs; High costs; Don’t know) 5. In your expert opinion, what is the level of public acceptance for the following conceptual options? 6. In your expert opinion, given our best knowledge about future market conditions, and absent government intervention, how long would it take for the private sector to adopt these technologies? (Short timeframe, less than 5 years; Medium timeframe, 5 to 15 years; Long timeframe, more than 15 years) 9. How would you rate your overall knowledge of operational options to reduce aircraft fuel usage and CO2 emissions? 11. In your expert opinion, what would be the potential R&D costs to develop the following options for commercial use? 12. 
Given your answer to question ten, what would be the potential costs to the air transport industry to adopt the following options to achieve those fuel savings and CO2 emissions reductions? 13. In your expert opinion, what is the level of public acceptance for the following options? The operational options rated included reduction of on-board weight; limited use of paint on airframes; use of the auxiliary power unit (APU) on the ground at the gate; Automatic Dependent Surveillance-Broadcast (ADS-B); Required Navigation Performance (RNP); and Continuous Descent Arrivals (CDA). 14. In your expert opinion, given our best knowledge about future market conditions, and absent government intervention, how long would it take for the private sector to adopt these technologies? (Short timeframe, less than 5 years; Medium timeframe, 5 to 15 years; Long timeframe, more than 15 years; Never) 17. How would you rate your overall knowledge of alternative fuel options to reduce aircraft CO2 emissions, such as biofuels? None (1) skip to question 25; Minimal (7) skip to question 25; Basic (2) continue to question 18; Proficient (3) continue to question 18; Advanced (5) continue to question 18. 18. In your expert opinion, compared to jet fuel currently in use, what is the potential for future reduction of CO2 emissions (on a life-cycle basis) for the following options? 19. In your expert opinion, what would be the potential R&D costs to develop the following options for commercial use? 20. In your expert opinion, what is the level of public acceptance for the following options? 21. In your expert opinion, given our best knowledge about future market conditions, and absent government intervention, how long would it take for the private sector to adopt these technologies? 
ti (<10years) Medium timeframe (10-20 years) (> 20 years) a. Coal to li i. a. Coal to liquid f. Hydrotreated Palm and Soy Oils i. 24. What other government actions, if any, should be undertaken address greenhouse gas emissions from commercial aircraft ? 25. Do you have any other comments about anything covered in th rating tool? If so, please comment here. To address our objectives, we interviewed selected officials knowledgeable about the aviation industry, the industry’s impact on the production of greenhouse gas and other emissions that have an impact on the climate, and options for reducing these emissions. We interviewed federal officials from the Environmental Protection Agency (EPA), FAA, the National Aeronautics and Space Administration (NASA) and the Departments of Defense and State. We also met with representatives of ICAO—a United Nations agency. We interviewed representatives of industry groups, environmental groups, airlines, aircraft manufacturers, aircraft engine manufacturers, alternative fuels manufacturers, economists, and academics. We interviewed officials based in the United States and abroad. We interviewed representatives of the EU and associations about the EU ETS. We completed a literature search and reviewed relevant documentation, studies, and articles related to our objectives. To specifically address commercial aviation’s contribution to emissions, we asked our interviewees to identify the primary studies that estimate current and future emissions. As a result, we reviewed and summarized the findings of the 1999 International Panel of Climate Change Aviation and the Environment report and its 2007 Fourth Assessment Report, which were most frequently named as the most authoritative sources on global aviation emissions. 
To specifically address technological and operational options to reduce commercial aviation's contribution to greenhouse gases and other emissions that can have an impact on the climate, we contracted with the National Academy of Sciences to identify and recruit experts in aviation and environmental issues. We interviewed 18 experts identified by the Academy, including those with expertise in aeronautics, air traffic management, atmospheric science, chemistry, climate change modeling, economics, environmental science, and transportation policy. In conducting these interviews, we used a standardized interview guide to obtain consistent answers from our experts and had the interviews recorded and transcribed. Based on these interviews, we assembled a list of options for reducing aviation emissions, and we asked our experts to assess these options on several dimensions. We provided each of our experts with a standardized assessment tool that instructed the experts to assess the potential of each technological and operational option on the following dimensions: potential fuel savings and emissions reductions, potential research and development costs, potential cost to the airline industry, potential for public acceptance, and time frames for adoption. For each dimension, we asked the experts to assess each option on a three-point scale. For example, we asked the experts to rate each option as having "low potential," "medium potential," or "high potential" for fuel savings and carbon dioxide emissions reductions. We directed the experts not to answer questions about areas in which they did not have specific knowledge or expertise. As a result, throughout our report, the number of expert responses discussed for each emissions reduction option is smaller than 18, the number of experts we interviewed.
Besides asking the experts to assess the potential of technological options, such as new aircraft and engine designs, we asked them to assess the potential of alternative fuels to reduce carbon dioxide emissions. Furthermore, the operational options we asked the experts to assess included options that the federal government must implement, such as air traffic management improvements, as well as options that the airlines can exercise to reduce fuel burn. We analyzed and summarized the experts' responses in order to identify those technological and operational options that the experts collectively identified as holding the most promise for reducing emissions. To analyze the results, for each option and dimension, we counted the numbers of experts that selected the "low," "medium," and "high" responses. We then determined an overall, or group, answer for each question based on the response the experts most commonly selected for each option and dimension. However, if approximately the same number of experts selected a second response, then we chose both responses as the group answer. For example, rather than reporting that the experts rated a particular option as having "high" potential, we instead reported that they rated it as having "medium-high" potential if approximately the same number of experts selected the "high" response as selected the "medium" response. Finally, if approximately the same number of experts selected all responses, then we determined that there was no consensus on that question and reported the result as such. In order to determine government options for reducing aviation emissions, we interviewed relevant experts, including those 18 recruited by the National Academy of Sciences, about the potential use and the costs and benefits of these options.
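The group-answer rule described above can be sketched in code. The tie margin below is our own assumption, since the methodology says only that counts were "approximately the same"; the function name and threshold are illustrative and are not part of the study's actual tooling.

```python
from collections import Counter

SCALE = ["low", "medium", "high"]

def group_answer(responses, tie_margin=1):
    """Aggregate expert ratings for one option/dimension into a group answer.

    responses: list of "low"/"medium"/"high" strings (don't-knows excluded).
    tie_margin: how close two tallies must be to count as "approximately
    the same" -- an assumption, since the report does not define a threshold.
    """
    counts = Counter(responses)
    tallies = sorted(((counts.get(r, 0), r) for r in SCALE), reverse=True)
    top, second, third = tallies
    # All three responses drew approximately equal support: no consensus.
    if top[0] - third[0] <= tie_margin:
        return "no consensus"
    # Runner-up approximately tied with the top: report a combined answer.
    if top[0] - second[0] <= tie_margin:
        pair = sorted([top[1], second[1]], key=SCALE.index)
        return "-".join(pair)  # e.g., "medium-high"
    return top[1]
```

For example, nine "high" votes against three "medium" votes yields "high", while a six-six split yields "medium-high", matching the reporting convention the methodology describes.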
We asked our interviewees to provide opinions and information on a variety of governmental options, including carbon taxes, cap-and-trade programs, aircraft and engine standards, government- sponsored research, and governmental subsidies. We looked at governmental actions that have been taken in the past and at those that have been proposed. We reviewed economic research on the economic impact of policy options for addressing greenhouse gas emissions. Our review focused on whether policy options could achieve emissions reductions from global sources in an economically efficient manner (for example, maximize net benefits). We interviewed EU officials to understand how the EU ETS will work and to determine issues related to this scheme, which is slated to include certain flights into and out of EU airports starting in 2012. Additionally, we reviewed and summarized the EU ETS and the legal implications of the scheme (see app. I). In addition to the contact above, Cathy Colwell and Faye Morrison (Assistant Directors), Lauren Calhoun, Kate Cardamone, Brad Dubbs, Elizabeth Eisenstadt, Tim Guinane, Michael Hix, Sara Ann Moessbauer, Josh Ormond, Tim Persons (Chief Scientist), Matthew Rosenberg, and Amy Rosewarne made key contributions to this report.
Aircraft emit greenhouse gases and other emissions, contributing to increasing concentrations of such gases in the atmosphere. Many scientists and the Intergovernmental Panel on Climate Change (IPCC)--a United Nations organization that assesses scientific, technical, and economic information on climate change--believe these gases may negatively affect the earth's climate. Given forecasts of growth in aviation emissions, some governments are taking steps to reduce emissions. In response to a congressional request, GAO reviewed (1) estimates of aviation's current and future contribution to greenhouse gas and other emissions that may affect climate change; (2) existing and potential technological and operational improvements that can reduce aircraft emissions; and (3) policy options for governments to help address commercial aircraft emissions. GAO conducted a literature review; interviewed representatives of government agencies, industry and environmental organizations, airlines, and manufacturers; and interviewed and surveyed 18 experts in economics and aviation on improvements for reducing emissions from aircraft. GAO is not making recommendations. Relevant agencies provided technical comments, which we incorporated as appropriate, and EPA said that emissions standards can have a positive benefit-to-cost ratio and can be an important part of policy options to control emissions. According to IPCC, aviation currently accounts for about 2 percent of human-generated global carbon dioxide emissions--the most significant greenhouse gas--and about 3 percent of the potential warming effect of global emissions that can affect the earth's climate, including carbon dioxide. IPCC's medium-range estimate forecasts that by 2050 the global aviation industry, including aircraft emissions, will emit about 3 percent of global carbon dioxide emissions and about 5 percent of the potential warming effect of all global human-generated emissions.
Gross domestic product growth is the primary driver in IPCC's forecasts. IPCC also made other assumptions about future aircraft fuel efficiency, improvements in air traffic management, and airport and runway capacity. IPCC's 2050 forecasts for aviation's contribution to global emissions assumed that emissions from other sectors will continue to grow. If other sectors make progress in reducing emissions and aviation emissions continue to grow, aviation's relative contribution may be greater than IPCC estimated; on the other hand, if other sectors do not make progress, aviation's relative contribution may be smaller than estimated. While airlines currently rely on a range of improvements, such as fuel-efficient engines, to reduce emissions, some of which may have limited potential to generate future reductions, experts we surveyed expect a number of additional technological, operational, and alternative fuel improvements to help reduce aircraft emissions in the future. However, according to experts we interviewed, some technologies, such as advanced airframes, have potential, but may be years away from being available, and developing and adopting them is likely to be costly. In addition, according to some experts we interviewed, incentives for industry to research and adopt low-emissions technologies will be dependent to some extent on the level and stability of fuel prices. Finally, given expected growth of commercial aviation as forecasted by IPCC, even if many of these improvements are adopted, it appears unlikely they would greatly reduce emissions by 2050. A number of policy options to address aircraft emissions are available to governments and can be part of broader policies to address emissions from many sources including aircraft. Market-based measures can establish a price for emissions and provide incentives to airlines and consumers to reduce emissions. 
These measures can be preferable to other options because they would generally be more economically efficient. Such measures include a tax on emissions and a cap-and-trade program, in which government places a limit on emissions from regulated sources, provides them with allowances for emissions, and establishes a market for them to trade emissions allowances with one another. Governments can also establish emissions standards for aircraft or engines. In addition, governments could increase government-sponsored research and development to encourage the development of low-emissions improvements.
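The efficiency rationale for trading can be illustrated with a toy calculation: when two regulated sources face different abatement costs, letting the cheaper abater do more of the work and sell allowances meets the same overall cap at lower total cost. The airline labels, per-ton costs, and allowance price below are entirely hypothetical and are not drawn from the report.

```python
# Toy cap-and-trade illustration: the same total abatement costs less
# when trading shifts effort toward the cheaper abater.
# All names and dollar figures are hypothetical.

def abatement_cost(tons, cost_per_ton):
    return tons * cost_per_ton

# Two airlines must jointly cut 100 tons of CO2.
cheap_abater_cost = 20   # $/ton for airline A (cheap upgrades still available)
costly_abater_cost = 60  # $/ton for airline B (few cheap options remain)

# Without trading: each airline must cut 50 tons itself.
no_trade_total = (abatement_cost(50, cheap_abater_cost)
                  + abatement_cost(50, costly_abater_cost))

# With trading: airline A cuts all 100 tons and sells 50 allowances to B
# at a price between the two abatement costs (say $40/ton).
allowance_price = 40
a_cost = abatement_cost(100, cheap_abater_cost) - 50 * allowance_price
b_cost = 50 * allowance_price  # B buys allowances instead of abating
trade_total = a_cost + b_cost

assert trade_total < no_trade_total  # same cap met at lower total cost
```

Here the cap of 100 tons is met either way, but trading cuts the combined cost in half, which is the sense in which economists call market-based measures more efficient than uniform mandates.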
With responsibility for about 375,000 passenger vehicles and light trucks, the federal government operates one of the largest motor vehicle fleets in the United States. The federal motor vehicle fleet provides transportation to support government activities, such as law enforcement and health care. The federal fleet costs the federal government more than $1 billion a year for vehicle acquisition, maintenance, operation, and disposal. The federal fleet represents a significant budgetary expense for the government and deserves to be well managed to provide appropriate and reliable transportation at the least cost. The General Services Administration (GSA) has both a regulatory and an operational role concerning federal motor vehicle fleets. Under the Federal Property and Administrative Services Act of 1949, GSA is responsible for issuing governmentwide policy for federal fleet management functions. These functions are the acquisition, operation, maintenance, and disposal of motor vehicles. In addition, GSA has regulatory responsibilities regarding (1) replacement standards for government-owned vehicles, (2) the size of passenger vehicles, and (3) the use of alternative fuels for federal vehicles. Since 1954, GSA has operated its Interagency Fleet Management System (IFMS) to provide vehicle fleet services to federal agencies. The IFMS vehicle fleet, which GSA leases to other federal agencies, represents approximately one-third of the federal fleet. Federal agencies own most of the remaining two-thirds of the fleet, and about 7,500 vehicles, or about 2 percent of the fleet, are commercially leased from the private sector. Regardless of the source of their vehicles, federal agencies are responsible for the day-to-day management of their motor vehicle fleets. This means that each agency is to ensure (1) that it has the appropriate number and types of vehicles to meet its objectives and (2) that these vehicles are operated in the most cost-efficient manner.
Table 1.1 shows the composition of the federal motor vehicle fleet by agency. In 1986, Congress enacted the Consolidated Omnibus Budget Reconciliation Act of 1985, or COBRA. Congress believed that significant savings could be achieved by finding more cost-efficient means to acquire, operate, maintain, and dispose of motor vehicles in federal agencies. As a consequence, COBRA required the heads of federal agencies, the Administrator of GSA, the Director of the Office of Management and Budget (OMB), and the Comptroller General to take certain actions to improve the management and efficiency of the federal fleet and to reduce the costs of its operations. Specifically, COBRA required each agency that operates more than 300 motor vehicles to identify, collect, and analyze all of the costs of its motor vehicle operations. In addition, each agency was to conduct a comprehensive, detailed study to compare the costs and benefits of its motor vehicle operation with those of (1) GSA’s IFMS, (2) private sector firms, or (3) any other means that could be less costly to the federal government. GSA is responsible, in cooperation with OMB, for issuing regulations to implement the law. OMB is required to monitor agency compliance and to annually provide Congress with a summary and analysis of statements submitted by agencies concerning the operations of their motor vehicle fleets. COBRA required us to report on actions OMB, GSA, and the agencies took to comply with the act’s requirements. Accordingly, in 1988, we reported on the actions of selected agencies to comply with the act’s requirements and determined that COBRA did not specify a method for compiling cost data or conducting cost comparisons. We also reported that most agencies had not conducted cost-comparison studies.
At the request of the Committee on Governmental Affairs and Congressman Bob Franks, our objectives were to (1) summarize obstacles faced by federal agencies in achieving cost-efficient fleet management and (2) identify examples of the management practices that managers of public and private fleets considered to be essential to cost-efficient fleet management. To describe obstacles faced by federal agencies in achieving cost-efficient fleet management, we met with members of the President’s Council on Management Improvement’s (PCMI) Interagency Task Force on Federal Motor Vehicle Fleet Management. In 1991, the PCMI established the task force to identify obstacles to cost-efficient fleet management and provide recommendations to improve it. The task force consisted of fleet managers from the larger federal agencies—agencies that owned or leased fleets of 300 or more vehicles—and representatives from GSA and OMB who had fleet management responsibilities. To identify the management practices that managers of public and private fleets considered essential to cost-efficient fleet management, we conducted interviews at two levels. First, we contacted representatives from fleet industry associations and fleet management consultants. We did so to identify private sector firms and state governments that public and private sector fleet managers recognized as having well-managed fleets or as using new techniques to improve their fleets and reduce costs. Second, we interviewed fleet managers and other officials from these companies and state governments to learn the practices that they used to make their fleets more cost-efficient and that could be applicable to federal fleets. We asked them to provide examples demonstrating the benefits of the practices they described. However, we did not independently evaluate the extent to which these practices improved the fleet management of the organizations visited.
Once we developed a list of these practices, we contacted fleet management experts to validate the importance of these management practices to cost-efficient fleet management. The fleet management experts we contacted are listed in appendix I. Also, as a result of our discussions with the fleet managers and fleet management consultants, we obtained and reviewed documents that provided further detail on obstacles to and practices of fleet management. These documents included fleet management studies by federal agencies, state governments, consulting firms, and fleet industry associations. In addition, we gathered further information on the management practices through literature searches. We did our work from April 1993 through June 1994 in Albany, New York; Washington, D.C.; and at the locations of the fleet management experts visited, in accordance with generally accepted government auditing standards. On October 21, 1994, we discussed the information in this report with OMB’s Deputy Director for Management, and his comments are presented on page 33. Motor vehicle fleets need to be managed in a cost-efficient manner to provide appropriate and reliable transportation. Fleet managers in the public and private sector told us that uniform policies and procedures, sound information for making decisions and assessing performance, and predictable funding for vehicle replacement are essential elements for managing a cost-efficient fleet. However, the PCMI’s Task Force on Federal Motor Vehicle Fleet Management found that federal agencies faced obstacles to managing a cost-efficient fleet and complying with COBRA requirements. In addition, the task force concluded that agencies still were not complying with the COBRA requirement to determine the most cost-efficient fleet alternative.
In its July 1992 report, the task force identified a number of obstacles that prevented federal agencies from managing the fleet cost-efficiently and made specific recommendations for addressing these obstacles. With the concurrence of the PCMI, the task force also assigned various agencies the responsibility for further study and implementation of the recommendations. The most significant of the obstacles identified by the task force were the following: Agencies lacked uniform guidance to help them perform valid comparisons of fleet costs and benefits between their agencies’ fleets and those of other alternatives, such as GSA’s IFMS and private sector firms. Agencies did not have sufficient basic vehicle information or complete and timely agency data collection efforts to help them efficiently manage their fleets and assess their performance from acquisition through disposal. Unpredictable funding and restrictive agency solicitations limited agencies’ ability to select a more cost-efficient alternative for managing and replacing their fleets. The task force also identified other obstacles to cost-efficient fleet management. However, the ones we mentioned—guidance, information, and funding—related most directly to what fleet management experts in the public and private sector told us they considered to be the essential elements of fleet management. COBRA required agencies to compare the costs of operating their fleets with the costs of IFMS and those of private sector fleets so that agencies can determine the least costly method of managing their fleets. However, the task force found that COBRA’s objectives—for agencies to have efficient and cost-effective fleet management—were not being met. One reason agencies were not making cost comparisons was the lack of uniform guidance for them to make such cost comparisons. 
In its 1992 report, the task force concluded that agencies lacked uniform guidance for performing valid COBRA cost comparisons, a finding similar to one in our 1988 report. Specifically, the task force found that the absence of uniform guidance made it difficult to share, consolidate, or compare information on the operations, costs, and benefits of the agencies’ fleets with information on other fleet alternatives. Because of this lack of uniform guidance, the task force concluded that agencies were confused about how to structure and conduct COBRA cost-comparison studies that would yield meaningful and equitable results. As a result, the task force said, some agencies had invested what they described as significant resources, i.e., money and staff, to conduct studies that were subsequently found to have had limited value. Other agencies had not conducted the studies at all. According to the OMB officials responsible for monitoring COBRA motor vehicle cost-comparison studies, only one agency—the Internal Revenue Service (IRS)—had completed an acceptable cost comparison as of June 1994. According to the OMB officials, IRS’ 1991 cost comparison was acceptable because it compared the costs of operating IRS’ vehicle fleet, GSA’s IFMS, and a private sector fleet. As recommended by the task force, in 1993 OMB issued uniform guidance—(1) minimum quality standards, (2) a cost-comparison handbook, and (3) a cost accounting guide—for conducting cost-comparison studies. In March 1993, OMB developed minimum quality standards for the acceptance of past agency efforts to comply with COBRA requirements. Also in March 1993, OMB issued interim guidance through its Federal Motor Vehicle Fleet Management Cost Comparison Handbook, which agencies were to use in conducting their COBRA cost comparisons.
In addition, in May 1993 OMB issued additional interim guidance, titled The Federal Motor Vehicle Fleet Cost Accounting Guide, to resolve agency questions concerning cost elements and cost accounting standards for managing motor vehicle fleets. The guide lists principles and standards for agencies to determine costs, including obligations and outlays incurred in the operation, maintenance, acquisition, ownership, and disposition of federal motor vehicles. Although the standards, handbook, and guide had not been finalized, OMB, through the task force, advised the agencies to use them. In 1993, the task force supplied federal agencies with three options for complying with COBRA: (1) rely on past agency COBRA cost-comparison studies if they met the March 1993 minimum quality standards, (2) use the 1993 motor vehicle cost-comparison handbook and accounting guide to conduct COBRA cost comparisons, or (3) rely on the results of a comprehensive analysis recommended by the task force. The comprehensive analysis was to be a pilot project conducted by certain agencies to test alternative ways of conducting COBRA cost comparisons. However, according to OMB officials, agencies still had not complied with the minimum quality standards, nor had they completed any cost-comparison studies using the 1993 cost-comparison handbook and accounting guide as of June 1994. Also, agencies had not conducted the comprehensive analysis. However, the PCMI’s task force has not met since October 1993, and no agency is ensuring that the comprehensive analyses and other corrective actions recommended by the task force to assist agencies in meeting COBRA requirements are properly implemented. As recommended by the National Performance Review, in October 1993, the President established the President’s Management Council (PMC) to ensure that the reforms adopted as a result of the National Performance Review are implemented throughout the executive agencies. 
The National Performance Review report also said the President should update the Executive Order that established PCMI and revise its role in relation to the new PMC. However, an OMB official said that an executive order to do this had not been drafted and no decision had been made by members of the PCMI on what their new role should be in relation to the new President’s Management Council. As a result, the fleet task force is not active and its future mission has not been defined. A good management information system should provide the federal agency fleet manager with timely, accurate, and complete information on the costs of acquiring, operating, maintaining, and disposing of vehicles. Such information is vital to agencies for doing COBRA cost-comparison studies, according to the guidance issued by OMB, and for providing the central monitoring required by COBRA. Also, the system should permit the fleet manager to conduct ad hoc analyses to help identify opportunities for reducing costs and improving a fleet’s performance. GSA collects such information for the IFMS fleet. However, according to the task force’s 1992 report, other federal agencies generally lacked such basic information to effectively and efficiently manage their fleets. For example, according to the task force, many federal agencies did not have complete and timely information on vehicle maintenance and repairs. Moreover, the task force reported that agencies often lacked information on their fleets, such as the age, mileage, geographic location, and usage of the vehicles in their fleets. The task force found that inadequate systems and data collection efforts contributed to the agencies’ lack of this critical information. The task force also found that agency systems varied in comprehensiveness and sophistication, ranging from manual systems and personal computers to IFMS’ comprehensive database of fleet information. 
For example, the Department of Agriculture, a task force participant, recognized the importance of improving its information on the costs, status, and condition of its fleet in its 1993 internal assessment of its fleet management information systems. In the assessment report, Agriculture officials concluded that these systems lacked considerable data. In addition, they concluded that Agriculture’s various departmental components had erroneous and inconsistent data, which made using the data for purposes of management and analysis difficult. The task force further noted that even when agencies collected vehicle information, it may not have been useful, because it was inadequate or outdated. In March 1994, the Department of Transportation’s Inspector General reported an example of such inadequate collection efforts at the Federal Aviation Administration. The Inspector General found that the usage records and vehicle retention justifications required by Transportation were not maintained or were not adequate to support the retention of 70 percent of the vehicles in the sample during the Inspector General’s audit. To do COBRA cost comparisons, the task force said agencies needed to improve their fleet management information systems and data collection efforts. Accordingly, in its report, the task force recommended to the PCMI a comprehensive analysis of federal fleets. The purpose of this analysis would be to define requirements and plans for standardizing the reporting of fleet data. Also, the task force reported that federal agencies needed to determine what information was required to improve the quality of the vehicle maintenance of their fleets. As of June 1994, the task force had not met to assign an agency to manage the comprehensive analysis. 
Fleet managers in the state governments visited told us that on the basis of their experiences, predictable funding could help federal agencies to recover the full costs of fleet operations and to fund the replacement of vehicles in a timely fashion. Also, to determine whether the private sector is the most cost-efficient alternative, the task force found that agency fleets needed federal solicitations that encouraged private sector participation. However, the task force found that unpredictable funding and restrictive solicitations have limited the use of the most cost-efficient fleet management alternatives. According to a member of the task force, the task force found that having to fund fleet operations through single-year (annual) appropriations may have limited an agency’s ability to replace its vehicles in a timely and economical manner. For example, Department of Agriculture fleet managers found that using directly appropriated funds to replace motor vehicles significantly affected Agriculture’s ability to maintain an adequate replacement schedule. These problems occurred because funding to replace vehicles could not always be predicted. As of 1994, the owned vehicles in Agriculture’s fleet, which were purchased with funds appropriated for such purposes, were an average of 10 to 11 years old. This was 4 to 5 years beyond the 6 years that Agriculture officials said they considered to be an economical replacement period. Agriculture’s officials said that the age of these vehicles resulted in significant downtime, high repair and maintenance costs, unreliable transportation, and increased fuel consumption. To solve the problem of unpredictable funding, Agriculture said that a revolving fund would enable it to maintain an up-to-date fleet, which would be capable of meeting mission requirements at a reasonable cost. Agriculture pointed to its Forest Service fleet, which it believed had operated efficiently through the use of a revolving fund. 
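The timing tradeoff behind an "economical replacement period" can be sketched with a standard economic-life calculation: keep a vehicle while its average annual cost (capital consumed plus cumulative maintenance) is still falling, and replace it once that average begins to rise. All dollar figures, depreciation rates, and maintenance assumptions below are hypothetical illustrations, not Agriculture's data.

```python
# Economic replacement age: the age that minimizes the average annual cost
# of owning a vehicle. All figures are hypothetical.

PURCHASE_PRICE = 20_000

def resale_value(age):
    # Assume the vehicle loses 20 percent of its remaining value each year.
    return PURCHASE_PRICE * 0.8 ** age

def maintenance_in_year(age):
    # Assume maintenance and repair costs grow as the vehicle ages.
    return 500 + 400 * age

def average_annual_cost(age):
    capital = PURCHASE_PRICE - resale_value(age)
    maintenance = sum(maintenance_in_year(y) for y in range(1, age + 1))
    return (capital + maintenance) / age

# The economical replacement age minimizes average annual cost.
best_age = min(range(1, 15), key=average_annual_cost)
```

Under these assumed numbers the average annual cost falls for the first several years and then climbs as repair costs dominate, which mirrors Agriculture's experience that holding vehicles well past their economical replacement period raises repair, downtime, and fuel costs.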
Agriculture estimated that updating the vehicles for the rest of its fleet would save approximately $30 million annually. GSA operates a revolving fund for its IFMS fleet for which agencies pay a rental charge to cover GSA’s fleet operations costs, thereby reducing GSA’s need for appropriations from Congress. The task force recommended exploring three alternatives to funding fleet operations. These alternatives were single-year appropriations, revolving funds, and multiyear appropriations. At the time of our review, the task force had not met to explore these alternatives. Through discussions with private sector managers, the task force identified restrictions to solicitations because of statute or agency requirements that contributed to the private sector’s limited participation in the operations of federal fleets. These restrictions included some agency requirements that private sector firms bidding to provide fleet services to federal agencies were to provide all fleet management functions from acquisition through disposal rather than just one or more of these functions; meet an agency’s fleet needs for the entire country, including isolated locations, rather than specific geographic locations; and meet delivery time frames, such as replacing an agency’s entire fleet within 90 days of contract award, that the task force found the private sector viewed as unrealistic. In addition, all agencies must certify that their subcontractors meet wage standards in the U.S. Department of Labor Service Contract Act of 1965 that tie wages to prevailing local wage rates. The task force concluded that these requirements would have to be changed to promote private sector participation in federal fleets. Accordingly, the task force recommended a comprehensive feasibility study to determine how these restrictive solicitations could be eliminated to encourage private sector participation and improve cost-efficiency. 
Specifically, this feasibility study would address whether future agency solicitations for meeting fleet needs could omit the agency requirements that contractors be responsible for all fleet management functions and for the entire country. At the time of our review, the proposed feasibility study had not been conducted. The task force did not make any specific recommendations to change the requirements for delivery time frames. Also, the task force further recommended that OMB explore having the Department of Labor waive the statutory requirement that the private sector fleet firms certify that their subcontractors pay prevailing local wage rates. At the time of our review, OMB and Department of Labor officials had not begun to discuss the possibility of waiving the wage standard certifications. Eight years after the passage of COBRA, most agencies still did not have the needed cost-comparison studies, sound information, and proper accounting of costs in place to identify the least costly method to operate their fleets as required by the act. In our view, given the significant budgetary expenditure for federal fleets, the agencies’ failure to conduct required cost-comparison studies and the lack of sound information and proper accounting of costs to better enable agencies to manage their fleets in an efficient and effective way are management weaknesses. To help correct these weaknesses, we looked to the private sector and state governments to identify recognized management principles for effective fleet management. A common theme of the managers of public and private sector fleets we visited was their statement that fleet managers needed to adopt a cost-conscious culture throughout their organizations and, as part of this culture, to apply recognized practices to improve fleet management. 
Budget constraints, competition, and the need to cut costs have led managers from the state governments and private sector firms we visited to reexamine the role of fleet management within their organizations. These managers told us they recognized the need to have a cost-conscious culture in which they shifted the emphasis of their fleet management role from simply purchasing vehicles, parts, and services to one of making continuous improvements that would lead to reduced costs and improved overall efficiency of the fleet. As part of this cost-conscious culture, fleet management experts told us that top management made fleet managers accountable for identifying improvement opportunities, such as determining the right size of a fleet, and for putting these improvements into effect. In this culture, the experts noted that fleet managers served as in-house consultants to advise their customers in the rest of the organization on ways to reduce their vehicle costs and to use their vehicles more efficiently. Accordingly, fleet managers and their customers applied what they deemed to be essential management practices to accomplish these goals. For example, increasing budget constraints caused one private sector firm to adopt a more cost-conscious culture. Introducing a cost-conscious culture enabled this firm’s fleet manager to centralize fleet management and reduce fleet costs by contracting out for fleet maintenance and information systems support. In another example, a state government fleet manager said that an increased emphasis on cost-consciousness in his state had enabled him to improve vehicle usage, better collect and analyze data on vehicle cost and performance, and identify better ways to fund vehicle replacement. The views of these fleet managers reinforced the findings in our February 1992 report on the cultural changes introduced by nine companies that were concerned about inventory management. 
These companies used a combination of techniques to introduce cultural changes, including training employees and allowing them to participate in making management decisions. Also, their cultural changes typically included a greater awareness of the needs of customers and a recognition of the need for innovation. Fleet industry officials identified five management practices that they believed were essential to cost-efficient fleet management. These practices were conducting utilization assessments to determine the right size of the fleet and to establish a baseline for fleet operations; having information and supporting management information systems to enable managers to make sound decisions and assess performance; comparing, or benchmarking, the cost and performance of a fleet with those of the best fleets; funding the fleet through a revolving fund; and centralizing fleet management responsibilities to (1) establish written policies, procedures, and other guidance; and (2) identify opportunities for improving fleet cost-efficiency. Typically, a vehicle utilization assessment to determine the appropriate fleet size is the crucial first step in reforming a vehicle fleet operation, according to the fleet management experts. As one of the experts put it, a utilization assessment is the quickest way for a fleet to become more cost-efficient. When performed properly, a utilization assessment creates an accurate snapshot of the state of the fleet. In addition, the experts explained that a utilization assessment will identify opportunities to streamline the size and composition of fleets through vehicle reduction, reassignments, and increased sharing of vehicles. A fleet consulting firm estimated that utilization assessments can result in savings of more than $1 million per year for large fleets of 5,000 or more vehicles. One of the fleet management experts said there are two steps to doing a utilization assessment. 
First, establish parameters, plans, and guidelines for the right-sizing effort; and second, conduct the utilization assessment, which should address (1) the frequency and purpose of use, vehicle age, and condition of the existing fleet and (2) possible alternatives to current vehicle assignments, such as shared use of vehicles, use of privately owned vehicles, and rentals. A consultant for a local government provided an example of how a utilization assessment can reduce costs. The consultant examined the composition of the fleet of about 340 vehicles, its size, and the way its vehicles were being used. On the basis of this assessment, the consultant made recommendations to (1) refine and enforce citywide standard vehicle utilization tracking procedures (e.g., purpose, miles, hours); (2) reduce the fleet size through pooling and use of personal vehicles for low-mileage users; and (3) enforce the guidelines on the purchase of lower cost vehicles. The consultant reported having identified $1.2 million in potential savings over a 5-year period if these actions were taken. In another example, a state government reported that a team of its fleet officials conducted a utilization assessment that concentrated on fleet size and type. Through this assessment, the team identified considerable cost savings while the fleet still met the state’s needs. They did so by (1) replacing 41 full-size vehicles with mid-size vehicles; (2) reducing the size of the fleet for 4 state-level departments by 42 vehicles; (3) replacing high-mileage, high-maintenance vehicles in other state departments with the 42 vehicles; and (4) disposing of the replaced vehicles. The state reduced its cost per vehicle by $700 when it replaced full-size vehicles with mid-size vehicles and achieved a one-time savings of $796,000 when it reduced fleet size and replaced high-mileage vehicles. 
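The state’s reported figures can be tallied with simple arithmetic; the following is a minimal sketch, in which the 5-year planning horizon is an illustrative assumption rather than a figure from the state’s report:

```python
# Savings arithmetic behind the state's utilization assessment, using the
# figures reported above. The 5-year horizon is an illustrative assumption.

downsized_vehicles = 41     # full-size vehicles replaced with mid-size
saving_per_vehicle = 700    # reported cost reduction per downsized vehicle
one_time_saving = 796_000   # reported savings from fleet reduction/replacement

annual_downsizing_saving = downsized_vehicles * saving_per_vehicle
print(f"Recurring saving from downsizing: ${annual_downsizing_saving:,} per year")

# Combined view over an assumed 5-year planning horizon
five_year_total = one_time_saving + 5 * annual_downsizing_saving
print(f"Five-year total: ${five_year_total:,}")
```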
In addition, as a result of this assessment, the state’s fleet management planned to replace its full-size vans and station wagons with minivans, which, according to the assessment team’s calculations, had a lower purchase price and operational cost per mile. The fleet management experts explained that after a completed utilization assessment, fleet utilization should be tracked as an ongoing practice through the organization’s management information systems. All of the fleet experts with whom we met said that having the needed information supported by good management information systems is essential for cost-efficient fleet management. They said that to operate an efficient, low-cost fleet, a manager must have an information system that captures all direct and indirect costs associated with operating a vehicle. They added that accurate and instantly available data are essential for the management of virtually every fleet activity, including vehicle acquisition, operations, maintenance, and disposal. Specifically, these experts said that to make informed management decisions managers needed information on (1) the profile of the fleet and its life-cycle history (i.e., acquisition through disposal) on each vehicle; and (2) sufficient information to compare fleet costs and benefits between the organization’s fleet and those of other organizations. They also said that an organization’s management information systems needed to have the capacity to not only provide this basic information but to permit the fleet manager to identify trends and patterns and to conduct ad hoc analyses of different scenarios of fleet mixes—i.e., types of vehicles—and costs. Thus, they said that it was not enough to simply maintain this information; it was also necessary to use it to make key decisions in planning and managing the fleet. An official from one private sector firm described how the firm’s management information system was the cornerstone of its fleet management. 
By having a system with access to detailed cost information on vehicle maintenance, safety, and resale value, the fleet manager was able to achieve significant cost savings by changing the corporation’s fleet mix. He determined, on the basis of his analysis of these fleet costs, that converting the entire fleet to minivans, at a total savings of $62 million, would be more cost-effective. In addition, the firm’s system operated 24 hours a day as an emergency hotline in the event that vehicle users need information or guidance to handle after-hours questions and problems. An official from another firm discussed how the firm used its system to identify a systemic problem with a particular part that was previously treated as an unrelated series of isolated incidents. The corporate fleet manager used his information system to determine the frequency of seat bolt breakages on a particular model. These breakages increased the firm’s exposure to car repairs, personal injury, and lawsuits. As a result of the manager’s analysis, the firm was able to get the manufacturer to make the necessary safety changes and reimburse the firm for the costs of bolt repairs and related liabilities. At the firms we visited, the fleet managers were responsible for their firms’ fleet information. However, most of these firms contracted out for fleet information systems and services. The contractors used were the fleet management services companies that had the largest databases on motor vehicle fleet management in the country. By using existing service company systems, the firms avoided the costs of operating their own systems, had readily available information on their fleets, and could obtain information on other firms that enabled them to compare their present and projected costs and performance with those of other similar fleets. 
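The kind of per-vehicle life-cycle record and ad hoc fleet-mix analysis the experts describe can be sketched as follows; all field names and dollar figures here are hypothetical illustrations, not data from the firms we visited:

```python
# A minimal sketch of a vehicle life-cycle cost record and an ad hoc
# fleet-mix analysis of the kind described above. All identifiers and
# figures are hypothetical.
from dataclasses import dataclass

@dataclass
class VehicleRecord:
    vehicle_id: str
    vehicle_type: str
    acquisition_cost: float
    operating_cost: float   # cumulative fuel, maintenance, and indirect costs
    resale_value: float     # expected proceeds at disposal

    def life_cycle_cost(self) -> float:
        # Total cost from acquisition through disposal
        return self.acquisition_cost + self.operating_cost - self.resale_value

fleet = [
    VehicleRecord("V001", "full-size sedan", 24_000, 9_500, 5_000),
    VehicleRecord("V002", "mid-size sedan", 20_000, 8_000, 4_500),
    VehicleRecord("V003", "minivan", 21_000, 7_200, 6_000),
]

# Ad hoc analysis: average life-cycle cost by vehicle type (the fleet mix)
by_type: dict[str, list[float]] = {}
for v in fleet:
    by_type.setdefault(v.vehicle_type, []).append(v.life_cycle_cost())
for vtype, costs in by_type.items():
    print(f"{vtype}: average life-cycle cost ${sum(costs) / len(costs):,.0f}")
```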
In our view, once agencies have conducted utilization assessments and are collecting the right information to make sound decisions, they are in the position to benchmark the costs and performance of their fleets. At that point, agencies would be able to make cost comparisons between their costs and those of other public and private sector fleets. Many of the fleet experts with whom we visited agreed that fleet managers must be aware of how their fleets compared to others and how units within their fleets compared to each other. According to the fleet management experts, benchmarking is a learning process that begins when one organization looks at the best practices of other firms for a point of reference. An organization benchmarks by comparing its processes with those in other firms and developing data about cost and performance. Through benchmarking, the experts found that organizations have been able to identify the best practices and methods of operating their fleets. For example, one firm told us that by using its fleet management service company’s extensive database of fleet cost information on different firms, it was able to successfully benchmark its fleet costs. Numerous categories of the firm’s fleet costs, such as administrative expenses, maintenance, depreciation, and original acquisition, were compared with the costs of other firms in the fleet industry. Through this benchmarking, the contractor reported having identified potential annual savings of $19.8 million—$6.4 million in cost reductions and $13.4 million in productivity enhancements. Another firm told us it used the database of its fleet management service company to benchmark its motor vehicle accident rates and associated costs with those of other companies’ fleets. 
After determining that its accident rate and costs were higher than those in the benchmarked firms, the firm initiated a driver’s safety program in 1993 and added safety features, such as air bags and antilock brakes, to its vehicles. Fleet managers for the state governments that we visited said they did not formally benchmark their fleet costs and performance. However, the state government officials emphasized the importance of generally knowing how their states compared with the rest of the fleet industry. They said they got information for these comparisons through informal conversations with other public and private sector fleet managers and reviews of industry norms from fleet industry periodicals. In 1993, a fleet management expert for the National Association of Fleet Administrators (NAFA) reported the results of its benchmarking project to establish a database on the cost and performance of public sector fleets. The project report identified four sources for benchmarking data: internal trends, peer comparisons, industry norms, and best of class. The best of class data were based on the performance of the fleets that NAFA considered to be among the best managed fleets in the industry. Through this project, NAFA developed a benchmarking database that its officials said can be customized to meet the specific needs of public sector fleets. For example, the database contained data for fuel and maintenance costs per mile, vehicle age, and miles between breakdown. According to the project report, government agencies can use the benchmarked data from this database to identify opportunities to improve the quality and reduce the costs of their fleets. Nearly all of the fleet experts with whom we met recommended a revolving fund for governmental vehicle fleets. 
Under this funding approach, a fleet management program functions much like an in-house leasing company, acquiring vehicles and equipment and passing their costs on to fleet users by means of a charge-back system. The proceeds of user charges are to be accumulated in a revolving fund and used to defray costs, including vehicle replacements. If revolving funds are properly designed and implemented, they can provide sufficient funds to consistently replace fleet assets in a timely manner, according to the fleet experts. They said that a properly designed revolving fund would enable managers to charge users for full cost recovery, which also requires the support of an effective management information system to help properly account for costs. In addition, the fleet experts explained that using a revolving fund makes costs more visible to vehicle users, thereby creating powerful incentives for users to be more cost-conscious in their use of vehicles and even to dispose of vehicles that they do not really need. Finally, these experts said that a properly structured revolving fund would enable managers to more fully identify costs associated with operating a fleet, thus helping an organization to select the most cost-efficient alternative to meet its fleet needs. The state governments we visited all used revolving funds, which they referred to as internal service funds, to fund their fleet operations. The fleet managers in these states said it would be extremely difficult to operate a cost-efficient fleet without the ability to charge customers to fund operations and replace vehicles. These officials said that using a revolving fund to pay for the purchase of replacement vehicles provided stable and timely funding to replace vehicles. They also said that by using revolving funds, agencies can avoid the underfunding of fleet replacement, which can increase the age of the fleet and ultimately the cost of it. 
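The full-cost-recovery charge-back the experts describe amounts to straightforward arithmetic; the sketch below uses hypothetical cost inputs (none of these figures come from the organizations we visited):

```python
# Full-cost-recovery charge-back rate for a revolving fund, as described
# above. All cost inputs are hypothetical assumptions.

acquisition_cost = 22_000       # purchase price of the vehicle
expected_resale = 4_000         # anticipated proceeds at disposal
service_life_years = 5
annual_operating_cost = 2_400   # fuel, maintenance, insurance
annual_indirect_cost = 600      # administration and system support
expected_annual_miles = 12_000

# Spread the net capital cost over the service life, then add operating
# and indirect costs so the charge recovers the full cost from users.
annual_depreciation = (acquisition_cost - expected_resale) / service_life_years
annual_full_cost = annual_depreciation + annual_operating_cost + annual_indirect_cost
per_mile_rate = annual_full_cost / expected_annual_miles

print(f"Annual charge to the using unit: ${annual_full_cost:,.0f}")
print(f"Charge-back rate: ${per_mile_rate:.3f} per mile")
```

Because the rate includes depreciation, the charges accumulate in the fund to pay for the replacement vehicle, which is what the fleet managers said made replacement funding stable and timely.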
The firms we visited applied a concept that is similar to a revolving fund by charging their operating units for the actual cost of acquiring, operating, maintaining, and disposing of their vehicles. The firms’ fleet officials said that charging actual costs, including indirect costs, made fleet costs more visible to the business units and provided users with the incentive to be more judicious in their vehicle use. A motor vehicle fleet represents a sizable capital investment and a substantial operating expense. Fleet management experts and consultants told us this major financial investment deserves professional management. In discussions with these experts, it became clear that the role of a fleet manager was, in their view, not simply to acquire vehicles. They said that to be effective, the organization’s fleet manager should carry out the following responsibilities: establish and monitor written policies and procedures to be used by vehicle users throughout the organization; collect and analyze fleetwide data, including fleet costs and performance; look for opportunities, using the previously mentioned management practices, to improve fleet operations and service to users; and serve as the organization’s in-house consultant in promoting a corporate culture that focuses the users on reducing their vehicle costs. Thus, according to the fleet management experts of the organizations visited, it is a fleet manager’s responsibility to ensure that there are written policies and procedures for (1) fleet administration, acquisition, operations, maintenance, and disposal; and (2) the comparison of the organization’s fleet costs and benefits with those of other organizations. The experts also said that fleet management should use an effective management information system to ensure that appropriate information is collected and analyzed to monitor vehicle costs, utilization, and mix. 
In addition, they said that fleet managers should ensure that the organization’s funding is predictable and apply benchmarking. Finally, these experts made two other key points about the placement and role of the fleet manager in the organization. First, they said that fleet management responsibilities needed to be centralized so that the fleet manager would have a broader perspective on the organization’s fleet. The manager could then better compare the work units of the fleet and compare those work units with similar work units of other fleets. They also considered centralization important to avoid duplication of effort and to achieve economies of scale. Second, they pointed out that these responsibilities must be carried out by the organization even when vehicles or fleet services are obtained from alternate sources. The organization must carry out such responsibilities even if the alternate vehicle source provides administrative services, such as system support, recordkeeping, or maintenance. In addition to the management practices discussed by the experts, there appear to be benefits from interagency cooperation in discussing governmentwide fleet management issues. The task force provided an excellent forum through monthly meetings for fleet managers from various agencies to exchange ideas on improving federal fleet management. Also, as described in chapter 2, the task force made recommendations in its 1992 report to the President’s Council on Management Improvement to address the obstacles to cost-efficient fleet management that it had identified and had made some progress in implementing those recommendations. These task force recommendations were endorsed by the National Performance Review, which was established in 1993 to improve governmentwide operations. 
As recommended by the National Performance Review, in October 1993, the President established the President’s Management Council (PMC) to ensure that the reforms adopted as the result of the National Performance Review are implemented throughout the executive agencies. The functions of the Council include (1) improving overall executive branch management and ensuring the adoption of new management practices throughout the government; and (2) identifying examples of, and providing mechanisms for, interagency exchange of information about best management practices. The Council is also to consider the management reform experience of corporations, nonprofit organizations, and state and local governments. The National Performance Review report also said the President should update the Executive Order establishing the PCMI and revise its role in relation to the new President’s Management Council. However, an OMB official said that an executive order doing this had not been drafted, and no decision had been made by members of PCMI on what their new role should be in relation to the new President’s Management Council. As a consequence, the PCMI’s task force on fleet management has not met since October 1993, and no organization is acting as an interagency focal point for federal fleet management issues. While OMB and GSA have oversight responsibilities for federal fleets under COBRA, we believe that the interagency cooperation and communication provided through an independent body like the task force could be an effective way of identifying and addressing common fleet management concerns. An interagency body, like the task force consisting mainly of agency fleet managers, could provide a forum for discussions on fleet problems and solutions and draw on the management expertise of all its members. Through its meetings, such an interagency body could encourage and support agencies in adopting more innovative practices to improve their fleet management. 
Although officials from federal agencies generally agreed that the objective of COBRA was to determine the most cost-efficient fleet alternatives, including using IFMS and private sector firms, most agencies have been unsuccessful in fulfilling this objective. Since COBRA was enacted, most federal agencies have continued to operate their fleets without considering other alternatives. This fact appears to be primarily due to the obstacles reported by the task force—a lack of uniform guidance to perform COBRA cost comparisons, insufficient basic vehicle information, and unpredictable funding processes and restrictive solicitations. OMB has issued interim guidance to do COBRA cost comparisons, and the task force recommended actions to correct the other obstacles. However, since the PCMI and its task force have not defined their roles in relation to the new President’s Management Council and the task force has not met since October 1993, no organization is ensuring that the task force recommendations are being addressed. As a result, the agencies’ management weaknesses persist because of their failure to conduct cost comparison studies and the lack of sound information needed to identify the most cost-efficient source of vehicles and fleet services. Without doing a fleet study, agencies have no way of knowing whether they have cost-efficient fleets. To operate cost-efficient fleets, meet COBRA requirements, and correct the management weakness, federal agencies need to recognize and promote cost-conscious environments to enable fleet managers to operate cost-efficient fleets. On the basis of the experience of the private sector firms and states we visited, it appears federal agencies could make their fleets more cost-efficient by using or expanding their use of the management practices that fleet management experts have stated were critical to improving fleet performance and efficiency. 
These practices include utilization assessments; sound information systems; benchmarking; and, when authorized by law, the establishment and use of revolving funds. Fleet management experts also emphasized the importance of having centralized fleet management to provide a uniform and cost-conscious fleetwide focus. When used together by a cost-conscious fleet manager, fleet management experts said these practices would provide information for (1) evaluating the fleet’s cost and performance; (2) identifying opportunities for improvement; and (3) selecting the most cost-efficient alternative for vehicles and fleet services, as required by COBRA. As of June 1994, no interagency forum, such as the task force, served as a focal point to identify and address governmentwide fleet management issues and concerns. Such a forum could ensure that the task force recommendations are addressed and the previously mentioned management practices are tested to determine the potential for improving fleet management. To improve the cost-efficiency of federal fleets and to help them comply with COBRA requirements, we recommend that the Director of OMB, the organization responsible under COBRA for monitoring agency compliance, establish a corrective action plan with goals and milestones to monitor and ensure that agencies are conducting cost comparisons as required by COBRA. We also recommend that the Director arrange for agency pilot projects to test the potential for improvements and cost savings through the use or expansion of management practices, including utilization assessments; sound information systems; benchmarking; and, when authorized by law, the establishment and use of revolving funds. 
As part of the pilot projects, we recommend that the Director discuss with task force members the merits of having a central manager for each agency fleet who can establish and monitor written policies and procedures to be used by vehicle users throughout the organization; collect and analyze fleetwide data, including data on the costs and performance of fleets; look for opportunities, using the previously mentioned management practices, to improve fleet operations and service to users; and serve as the organization’s in-house advocate in promoting a corporate culture that focuses the users on reducing their vehicle costs. In addition, we recommend that the Director of OMB establish a plan with goals and milestones to monitor and ensure that the pilot projects are successfully completed; and reaffirm and clarify the role of the PCMI’s task force, or establish a similar interagency body that has the authority to (1) address the task force’s recommendations; (2) serve as an interagency forum for governmentwide fleet management issues; and (3) work with agencies to evaluate, and, if appropriate, eliminate or reduce restrictive agency solicitations that discourage private sector participation in federal fleets. We met with OMB’s Deputy Director for Management on October 21, 1994, to discuss the information in this report. He generally agreed with the report’s findings and said they were consistent with the work of the PCMI task force on federal fleet management, which was endorsed by the National Performance Review. He also generally agreed with the report’s recommendations. He said, and we agree, that decisions have to be made on how to address and implement the recommendations, such as establishing the authority and appropriate management level needed by an interagency body to make improvements in fleet management and to reduce fleet costs.
Pursuant to a congressional request, GAO reviewed the federal government's management of its motor vehicle fleet, focusing on: (1) obstacles to achieving cost-efficient fleet management; and (2) public and private fleet management practices that might be applicable to the federal fleet. GAO found that: (1) obstacles to cost-efficient federal fleet management include the lack of uniform guidance for conducting valid cost-comparison studies, insufficient vehicle information, unpredictable funding, and restrictive agency solicitations that limit private-sector competition; (2) in 1993, the Office of Management and Budget (OMB) issued uniform guidance for conducting valid cost-comparison studies in response to a task force recommendation; (3) most federal agencies continue to operate their fleets without complying with statutory requirements for cost-efficiency; (4) improving fleet management requires a cost-conscious culture; and (5) essential management practices for cost-effective fleet operation include assessing vehicle utilization to determine the appropriate size of the fleet, establishing a fleet operation baseline, having needed information and supporting management information systems to assess performance, comparing costs and performance with the best fleets, funding the fleet through a revolving fund, and centralizing fleet management responsibilities.
About 40 million people globally were living with HIV/AIDS as of December 2003, most of them in sub-Saharan Africa; few have access to treatment. Propelled by recent advances in ARV treatment, PEPFAR is the first U.S. program to seek to dramatically expand HIV/AIDS treatment in resource-poor settings. PEPFAR builds on U.S. bilateral efforts begun in June 2002 to prevent mother-to-child transmission of HIV during pregnancy, labor and delivery, and breastfeeding. In May 2003, P.L. 108-25 established the position of the U.S. Global AIDS Coordinator to lead the U.S. response to HIV/AIDS abroad; the Senate confirmed the Coordinator in October 2003. The office received its initial appropriation in January 2004. About two-thirds of those infected with HIV live in sub-Saharan Africa. More than 50 percent of all HIV infections in the world, and nearly 70 percent of HIV infections in Africa and the Caribbean, occur in the 14 PEPFAR countries. According to WHO, less than 7 percent of the HIV-infected people in need of ARV drugs were receiving them at the end of 2003. UNAIDS reports that about 3 million people died from AIDS in 2003, the vast majority of them in sub-Saharan Africa. The disease has decimated the ranks of parents, health-care workers, teachers, and other productive members of society in the region, severely straining national economies and contributing to political instability. Propelled by recent advances in ARV treatment, PEPFAR is the first U.S. program to seek to dramatically expand HIV/AIDS treatment in resource-poor settings. In the 1990s, medical experts found that new forms of treatment, involving a combination of three drugs, were effective in suppressing the virus and thus slowing progression to illness and death. According to medical experts, data from Brazil, Uganda, and Haiti showed that patients in resource-poor settings adhere well to this complex drug regimen. 
Adherence to ARV treatment is important: if patients do not take the drugs properly or consistently, the virus in their bodies may become resistant to the drugs and the drugs will cease to be effective. The treatment must continue for life. Since 2000, the price of ARV drugs has dropped considerably, from a high of more than $10,000 per person per year to a few hundred dollars or less per person annually, owing in part to the increased availability of generic ARV drugs and public pressure. In addition, some generic manufacturers have combined three drugs in one pill—known as fixed-dose combinations, or FDCs—thereby reducing the number of pills that patients must take at one time. While major multilateral and other donors allow recipients of their funding to purchase these FDCs, the Office of the U.S. Global AIDS Coordinator currently funds only the purchase of drugs that have been approved by a “stringent regulatory authority,” citing concerns about the quality of drugs that have not demonstrated safety and efficacy to such an authority. Presently, only brand-name drugs meet this standard. As a result, the Coordinator’s Office does not now fund the purchase of generic ARV drugs, including FDCs. However, on May 16, 2004, the HHS Secretary announced an expedited process for reviewing data submitted to the HHS/Food and Drug Administration (HHS/FDA) on the safety, efficacy, and quality of generic and other ARV drugs, including FDCs, intended for use under PEPFAR. To date, only more developed countries have offered ARV treatment on a massive scale. The planned expansion of treatment to millions of people in developing countries under PEPFAR coincides with international efforts to increase the availability of treatment to HIV-infected people in poor countries. 
These efforts include the launch of the Global Fund in January 2002 and a campaign by WHO, announced on December 1, 2003 (World AIDS Day), to provide access to ARV treatment to 3 million people by the end of 2005, commonly referred to as the “3 by 5” campaign. (See app. III for more information on global, including U.S., HIV/AIDS funding.) PEPFAR’s goal is to initiate ARV treatment for nearly 2 million people in the 14 targeted countries by 2008. As of February 2004, a total of 78,921 people, or about 4 percent of that goal, were receiving ARV treatment in these countries (see fig. 1). On April 25, 2004, to synchronize international efforts, the Global AIDS Coordinator and his counterparts from UNAIDS, the World Bank, the Global Fund, and other bilateral donors voiced their support for an international agreement to abide by the following principles: (1) that there be one agreed-upon framework for coordinating HIV/AIDS activities among all donors and other partners in each recipient country; (2) that each recipient country have one national AIDS coordinating authority; and (3) that each recipient country have one system for monitoring and evaluating AIDS programs. PEPFAR builds on U.S. bilateral efforts begun in June 2002 under another presidential initiative that focused on preventing mother-to-child transmission (PMTCT) of HIV during pregnancy, labor and delivery, and breastfeeding. This $500 million initiative, formally known as the International Mother and Child HIV Prevention Initiative, and more commonly referred to as the PMTCT Initiative, focused on the same 14 countries as PEPFAR. According to administration officials, the countries were selected based on the severity of their HIV/AIDS burden, the extent to which they have a substantial U.S. government presence, the effectiveness of their leadership, and foreign policy considerations. 
The initiative focuses on treatment and care for HIV-infected pregnant women and provides a short course of ARV treatment that has been shown to reduce by 50 percent the risk of transmission of the virus by breast-feeding mothers. With the establishment of the Coordinator’s Office, PMTCT Initiative funding and activities were included in PEPFAR. (See fig. 2 for a timeline of international and U.S. efforts to combat HIV/AIDS worldwide.) The agencies primarily responsible for implementing PEPFAR are the State Department, where the U.S. Global AIDS Coordinator is based and reports directly to the Secretary of State; USAID; and the Department of Health and Human Services (HHS). The Coordinator plays an overall coordinating role, and the State Department raises HIV/AIDS issues through diplomatic channels and public relations campaigns. USAID maintains overseas missions in 12 of the 14 PEPFAR focus countries, with personnel trained in procurement and managing grants to foreign entities; it works with NGOs and other entities. HHS’s overseas presence, which focuses on providing technical assistance, was established more recently. HHS/CDC provides clinicians, epidemiologists, and other medical experts who generally work directly with foreign governments, health institutions, and other entities. Within HHS, PEPFAR also draws on expertise from the National Institutes of Health/National Institute of Allergy and Infectious Diseases, which is involved in HIV/AIDS research in PEPFAR focus countries; the Health Resources and Services Administration, which has experience expanding HIV/AIDS and other health services in resource-poor settings in the United States and is providing some assistance in several PEPFAR focus countries; and the Office of the Secretary/Office of Global Health Affairs, which plays a coordinating role on HIV/AIDS within HHS. 
Other agencies involved in PEPFAR are the Department of Defense, which works on HIV/AIDS issues with foreign militaries, helps construct health facilities, and conducts some research and program activities in PEPFAR focus countries; the Peace Corps; and the Departments of Labor and Commerce, which are involved in HIV/AIDS-related activities in the workplace and with the private sector, respectively. (See fig. 3.) In May 2003, the U.S. Leadership Act established the position of the U.S. Global AIDS Coordinator “to operate internationally to carry out prevention, care, treatment, support, capacity development, and other activities for combating HIV/AIDS;” the Senate confirmed the Coordinator in October 2003. (See app. IV for detailed information on the structure of this office.) The Coordinator has been granted authority to transfer and allocate the funds appropriated to his office among the U.S. agencies implementing PEPFAR in the 14 focus countries and additional bilateral HIV/AIDS programs in other countries. The U.S. Leadership Act authorizing PEPFAR states that not less than 55 percent of the amount appropriated pursuant to section 401 of the act is to be spent on treatment and that at least three-quarters of that amount should be spent on the purchase and distribution of ARV drugs for fiscal years 2006 through 2008. Of the remaining 45 percent, 20 percent should be spent on prevention, 15 percent on palliative care, and 10 percent on orphans and other vulnerable children. Congress appropriated $488 million for the Coordinator’s Office in fiscal year 2004, and the President requested $1.45 billion for fiscal year 2005. The office was formally established in January 2004. It created three mechanisms, or funding “tracks,” to allocate money: track 1, track 1.5, and track 2. 
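The statutory spending floors described above can be expressed as a simple calculation. As a minimal sketch: the percentages below come from the U.S. Leadership Act as summarized in this report, but the dollar total is hypothetical, since the floors apply to amounts appropriated pursuant to section 401 for fiscal years 2006 through 2008 rather than to any specific figure cited here.

```python
# Illustrative sketch of the U.S. Leadership Act's minimum spending
# allocations for PEPFAR funds (fiscal years 2006-2008). Percentages are
# from the act as described in this report; the total is hypothetical.

def pepfar_spending_floors(total):
    """Split a total appropriation according to the act's minimum percentages."""
    treatment = total * 55 // 100   # not less than 55 percent for treatment
    arv_drugs = treatment * 3 // 4  # at least three-quarters of that for ARV drugs
    return {
        "treatment": treatment,
        "of which ARV drugs": arv_drugs,
        "prevention": total * 20 // 100,
        "palliative care": total * 15 // 100,
        "orphans and vulnerable children": total * 10 // 100,
    }

# Hypothetical $1 billion appropriation, for illustration only.
floors = pepfar_spending_floors(1_000_000_000)
```

Note that the four top-level categories (55 + 20 + 15 + 10 percent) account for the full amount, while the ARV-drug floor is a subset of the treatment share (three-quarters of 55 percent, or 41.25 percent of the total).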
Tracks 1 and 1.5 are one-time mechanisms that rapidly allocated funds to expand ongoing activities through Washington, D.C.-based multicountry awards and locally based country-specific awards, respectively. Track 2 serves as an annual operational plan for each country. A portion of the funds for tracks 1 and 1.5 was obligated by a target date of January 20, 2004, and the remainder was obligated by mid-February, following congressional notification; budgets for track 2 were submitted to the Coordinator’s Office for review on March 31, 2004, and approved on a rolling basis through early May. Pending congressional review, the Coordinator’s Office expects that agencies will have begun to obligate these funds by the end of June. PEPFAR activities are generally executed through procurement contracts or through grant agreements or cooperative agreements with implementing entities such as NGOs and ministries of health (and/or national AIDS control programs). (See app. V for additional information on initial obligations.) In our structured interviews, we identified the following major challenges to U.S. government agencies in expanding ARV treatment in resource-poor settings: (1) difficulties coordinating with other groups involved in combating HIV/AIDS; (2) U.S. government policy constraints; (3) shortages of qualified health workers; (4) host government constraints; and (5) weak infrastructure (see fig. 4). These challenges were also highlighted by numerous government and nongovernment experts whom we interviewed and in documents we reviewed. (See app. VI for additional analysis of these challenges.) All of the field staff we interviewed in the 14 PEPFAR countries identified problems coordinating with other groups. Nearly all cited problems coordinating with non-U.S. government groups, and slightly fewer cited problems coordinating with other U.S. government entities. 
Consequences of the coordination problems cited by field staff include duplicate efforts, confusion over standards, and heavy administrative burdens. Twenty-seven of 28 respondents cited challenges coordinating with non-U.S. government groups, particularly with host governments and multilateral organizations. Just over three-quarters (22 of 28) of the field staff we interviewed provided examples of challenges to coordination between the U.S. government and the host governments in the PEPFAR focus countries. One of the most commonly cited challenges dealt with host governments’ perceptions. Field staff said that host government officials are often skeptical of donors’ intentions and may question the commitment of donors and the sustainability of new treatment programs, especially when they think that donors are promoting programs that run counter to their national strategies. Similarly, an NGO official working with the host government in one of the 14 PEPFAR focus countries reported that when initial funding plans were created, U.S. field staff for the country ignored existing government and NGO programs. The official said that the plans for this country also did not incorporate any funding for training, which was a stated government priority. In addition, consulting the host government only after funding applications were completed has increased government officials’ skepticism regarding U.S. intentions and programs in this country, according to U.S. field staff. Field staff also noted that it is difficult to coordinate with host governments owing to the governments’ limited human resource capacity. In addition, staff are often hindered by the governments’ slow bureaucratic practices and lack of understanding of U.S. and other donors’ programs and policies. 
Field staff commented that all of these problems, paired with expedited PEPFAR timelines and consequently compressed consultation time, could increase the challenges faced by the United States in persuading host governments to support PEPFAR plans for expanded treatment. Field staff generally reported the most difficulty coordinating with host governments and multilateral organizations (see app. VII). Sixteen of 28 field staff identified coordination challenges with multilateral organizations (such as the World Bank, the Global Fund, WHO, and other U.N. organizations), with many citing perception issues. Because of the influx of PEPFAR funding, the United States will significantly increase its financial investment in treatment programs, potentially causing other donors to see themselves as overshadowed. Staff noted that before the United States instituted the PMTCT Initiative, UNICEF was the main implementer of these programs. According to field staff we interviewed, when the United States expanded its own programs, UNICEF and other donors felt “steamrolled” by programs that were quickly put in place by the United States with little input from the donor community. Some U.S. staff said that PEPFAR is replicating this unilateral approach. According to these staff, the perception that the United States acts unilaterally is compounded by the fact that, unlike many other donors, U.S. agencies are not allowed to contribute money to other donors’ programs or to pooled host government funding “baskets” for the health and other sectors. The staff noted that some donors therefore indicated that the United States is willing to create duplicative programs. Staff frequently cited the need for the United States to work with the WHO as both the PEPFAR program and WHO’s “3 by 5” campaign begin. Staff said that such coordination is needed to minimize overlapping efforts, confusion over standards, and the administrative burden on host governments and other donors. 
Finally, while some staff noted that they have not had enough time to coordinate efforts, many said that all stakeholders need to harmonize specific aspects of treatment programs—including treatment guidelines, training schedules and materials, technical approaches, educational and media campaigns, procurement policies, hiring and payment policies, and the collection and reporting of data. The staff indicated that without harmonization, unnecessary duplication and confusion could occur as treatment programs are expanded. Twenty-four of 28 respondents cited challenges in coordinating with other U.S. government agencies, their agency’s headquarters, or the Coordinator’s Office in Washington, D.C. Twenty-two of the field staff we interviewed told us that they face challenges coordinating with headquarters, and 15 of 28 said that they face challenges coordinating with other U.S. government agencies in the field. These challenges were also cited in documents field staff prepared for the Global AIDS Coordinator. Field staff reported that headquarters did not coordinate with them early in the process of developing activities for the PMTCT Initiative and PEPFAR. For example, they expressed concern that headquarters announced intended programs without first notifying staff in the field or giving them the opportunity to discuss the PMTCT Initiative and PEPFAR programs with host governments. Field staff stated that government officials in these countries often regarded such announcements as statements of commitment rather than intention, resulting in overly optimistic expectations of the amounts of funding they might receive from the United States. Also, headquarters’ limited coordination with field staff has made it more difficult for U.S. officials in-country to work with host governments, increasing these governments’ perception that the United States is imposing programs on them rather than seeking their commitment or concurrence, which could impede U.S. 
efforts to expand ARV treatment. In addition, when discussing coordination problems between the field and headquarters, most field staff said that they were burdened by administrative requirements during both the PMTCT Initiative and the initial stages of PEPFAR planning. For example, eight respondents stated that they rushed to complete multiple reporting requirements that were often unclear or redundant. This point was also made in several written communications from the field to the Coordinator’s Office. Three respondents stated that at the same time they were trying to work with their agency counterparts in the field to complete integrated reporting requests from the Coordinator’s Office, they were asked by headquarters to prepare duplicative, agency-specific reports, which further compounded their burden. Five respondents indicated that the time spent responding to these requests within the period allotted has directly limited their ability to implement treatment programs. Just over half (15 of 28) of the field staff also identified coordination challenges among agencies in the field. Most staff who raised interagency issues cited challenges arising from the different agencies’ roles—for example, HHS/CDC has traditionally provided technical assistance directly to foreign governments through cooperative agreements, while USAID has focused on development, primarily by managing grant agreements implemented by NGOs. Staff further stated that as the programs become more coordinated, challenges could arise from agencies’ differing administrative procedures. For example, agencies may have different procurement or hiring policies; agencies entering a program area may find themselves competing with another agency previously dominant in that area; and field staff busy with administrative tasks and program implementation may find little time to communicate with their field counterparts. Twenty-five of the 28 structured interview respondents identified U.S. 
policy constraints as a challenge that could limit the ability of the agencies implementing PEPFAR to rapidly expand treatment programs. In particular, unclear guidance on whether U.S. agencies can purchase generic ARV drugs, including FDCs, makes it difficult for the PEPFAR agencies to plan support for national treatment programs, some of which use these drugs. In addition, field staff raised concerns that their current contracting capacity will not be sufficient to manage the large influx of funds expected under PEPFAR. Further, differing laws governing the funds appropriated to these agencies—affecting procurement standards and foreign taxation of U.S. assistance—and varying grant requirements used by the agencies may challenge their joint efforts to expand ARV treatment programs. Twenty-one respondents indicated that they had not received adequate guidance on the procurement of ARV drugs, which makes it difficult for the U.S. missions to plan their support of country programs. At least four of the national programs in the PEPFAR focus countries are currently purchasing generic ARV drugs with their own funds or with funds from the Global Fund or other sources, and other countries are considering purchasing them. In addition, in other PEPFAR countries, NGOs such as Médecins sans Frontières (Doctors Without Borders) are also purchasing generic ARV drugs. Given this situation, and the fact that USAID and HHS/CDC have different procurement standards, one USAID official in Africa stated that adhering to the agency’s current standards, which generally require that USAID-financed pharmaceuticals be produced in and shipped from the United States, will present a challenge as more governments purchase generic FDCs to boost adherence. An HHS/CDC official in the same country stated that the host government is buying these drugs with Global Fund money and training doctors and pharmacists to support this regimen. 
He said that it would complicate the country’s ability to expand treatment if the United States is not able to support such a regimen. In addition, in communications to the Global AIDS Coordinator in mid- to late-2003, U.S. government officials in several PEPFAR focus countries requested guidance regarding local procurement of ARV drugs. A September 18, 2003, communication from Ethiopia observed that several local companies are poised to produce generic ARV drugs, and an October 8, 2003, communication from Uganda stated that generic drugs are available at much lower prices than brand-name drugs. The Uganda communication also stated that procurement of nonlocal goods or services (e.g., U.S. brand-name ARV drugs) to implement PEPFAR will undermine PEPFAR’s goal of enhancing local capacity to fight HIV/AIDS. Almost half (13 of 28) of the structured interview respondents, primarily from HHS/CDC, stated that contracting capacity in the field is a problem. According to documents submitted to the Coordinator’s Office, U.S. government field staff in four countries expressed the need for increased contracting capacity to process procurement of goods and services, such as medical equipment, and increased capacity to award and administer contracts, grant agreements, and cooperative agreements with implementing organizations to allow rapid expansion of treatment under PEPFAR. Further, a June 2003 communication summarizing lessons learned from the PMTCT Initiative stated that HHS/CDC, which uses the embassy contracting system, has experienced considerable delays, funding level ceilings, and other difficulties in processing contractual transactions. HHS/CDC uses the embassy contracting system because it does not have contract officers in the field. The communication stated that these difficulties raise concerns that the embassy system will not be able to handle the number of contracts and inflow of funds needed to expand treatment under PEPFAR. 
Two HHS/CDC respondents cited embassy spending limits as a problem. One HHS/CDC respondent explained that the embassy in his country can process purchase orders for up to $100,000 but that orders exceeding that amount require additional consultation in Washington, a process that can take 4 to 6 months. He added that the $100,000 ceiling will be reached quickly under PEPFAR and that the embassy procurement system is designed for buying items like furniture rather than evaluating, awarding, and managing long-term contracts or grant agreements with implementing partners. Another HHS/CDC respondent stated that it takes time to familiarize embassy personnel with the specifications for certain medical equipment related to ARV treatment. Moreover, he stated that if the equipment is specialized, it may have only one supplier, causing additional delays for the embassy to justify sole sourcing. When questioned about these examples, HHS/CDC contract officers at headquarters stated that a time frame of several months is not unusual and that the process could take just as long regardless of whether it went through the embassy, HHS/CDC headquarters, or an HHS/CDC field office. Although HHS/CDC field staff articulated more concerns regarding inadequate contracting capacity in the field, the PMTCT Initiative summary stated that the current number of USAID contract officers in the field will be insufficient to facilitate the number of contracts and large amount of funds needed to meet PEPFAR treatment goals. Another communication, dated December 5, 2003, spoke of “an urgent plea for greater contracting officer support,” and a third communication, dated October 16, 2003, cited “a desperate need for contracting agents in-country.” In addition, a USAID respondent in one country and HHS/CDC respondents in three countries stated that more staff in general are needed in the field to expand treatment under PEPFAR. 
The PMTCT Initiative summary and a communication from Botswana to the Coordinator’s Office offered several suggestions for addressing the problem. These suggestions included changing the contracting system or increasing the number of contract officers in the field and strengthening USAID regional contracting offices with additional personnel and capacity to travel to countries in their region. The PMTCT Initiative summary also recommended that HHS/CDC and its parent agency, HHS, work with the Department of State to review current contracting mechanisms and develop strategies that will allow for greater flexibility and capacity to program PEPFAR funds. According to technical comments on a draft of this report that were submitted jointly by the Coordinator’s Office, HHS, and USAID, the funding requests required of field staff for track 1.5 (rapid allocation of funds to expand ongoing activities) and track 2 (annual operational plans) specifically asked what additional contracting support field staff would need, and some posts have been allotted staffing positions to help fill these gaps. The agencies implementing PEPFAR are subject to varying laws and regulations regarding procurement and foreign taxation of U.S. government assistance, as well as differing grant requirements for audits of grantees. These differences may cause confusion among NGOs—particularly if they are not U.S. organizations—receiving grants from several agencies to implement PEPFAR.

Agencies Have Different Procurement and Taxation Rules

USAID and HHS agencies, such as HHS/CDC and the National Institutes of Health (HHS/NIH), may require their grantees to use different procurement standards owing to the agencies’ different appropriations legislation and operating procedures. 
In South Africa, for example, according to USAID officials in that country, the mission obligated all of its money for drug procurement under PEPFAR track 1.5 through the HHS/NIH; that agency’s funds are governed by less restrictive rules for overseas procurement, and HHS/NIH was therefore able to allocate the money quickly to meet a January 2004 deadline. In a January 2004 communication submitted to the Coordinator, officials in that country raised questions regarding the application of different procurement rules. Interview respondents in two other African countries also raised these questions. Similarly, the South African communication to the Coordinator raised questions concerning the application of rules on foreign taxation restrictions. Section 506 of the Foreign Operations, Export Financing and Related Programs Appropriation Act for 2004 (the 2004 Foreign Operations Appropriations Act) prohibits funds appropriated by the act from being used to provide assistance for a foreign country under a new bilateral assistance agreement unless the agreement exempts the assistance from taxation. In addition, the provision states that when a host country assesses taxes on U.S. assistance provided under the act, an amount equal to 200 percent of the total assessment shall be withheld from the fiscal year 2005 appropriations for assistance to that country. Since this restriction applies only to funds appropriated under the 2004 Foreign Operations Appropriations Act, it does not affect funds appropriated to HHS agencies in their own appropriations acts. According to the communication from the field and interviews we conducted with the procurement and legal officials who contributed to it, there could be confusion among agencies and grant recipients over managing funds provided under different appropriations laws, since some of the funds are subject to the taxation provision and some are not. 
In addition, there was initial confusion over what restrictions would apply to money appropriated to the Coordinator’s Office and transferred to HHS agencies. Since funding for the Coordinator’s Office was appropriated under the 2004 Foreign Operations Appropriations Act, certain restrictions apply to these funds, including the taxation provisions discussed above and procurement restrictions in the Foreign Assistance Act of 1961. Officials from the Coordinator’s Office told us that they recently determined that funds transferred to agencies from that office would still be subject to their original appropriations restrictions. In contrast, funds appropriated directly to HHS for PEPFAR-related activities are not subject to these restrictions. We spoke with the authors of the South African communication and an HHS/CDC grantee, who raised concerns over managing funds that may be subject to differing restrictions. They stated that grantees could be confused by differing sets of rules. The grantee, a U.S. organization, also noted that non-U.S. grantees often lack the resources to ascertain what these rules require. According to HHS officials, the Coordinator’s intention is to set one policy for all U.S. government agencies implementing PEPFAR.

Agency Requirements for Auditing Grantees Vary

Agencies have varying grant requirements regarding the auditing of foreign recipients of U.S. funds, possibly complicating the agencies’ oversight of organizations implementing PEPFAR. Office of Management and Budget circular A-133 provides uniform auditing standards applicable to all U.S. government agencies with respect to grants awarded to U.S. entities. However, it does not apply to non-U.S. entities that receive funds directly as grant recipients or indirectly as subrecipients. U.S. government officials expect that many such entities will implement PEPFAR. USAID officials noted that their agency requires that any local (i.e., non-U.S.) grantee spending more than $500,000 in U.S. 
government funds per year be audited annually, for example, by a preapproved local audit firm in accordance with U.S. government auditing standards. HHS/CDC’s audit requirements for non-U.S. grantees differ from USAID’s in that audits must be performed by a U.S.-based firm (which, according to USAID audit officials, could be expensive) and according to international accounting standards or standards approved by HHS/CDC. The January communication from South Africa requested that these differences be worked out quickly so that field staff can incorporate appropriate language and cost implications in grant agreements currently being negotiated with organizations that will be implementing PEPFAR. Insufficient host country human resources critically challenge U.S. efforts to implement and expand AIDS treatment, according to agency officials in 23 of our structured interviews as well as key documents we reviewed. Inadequate training; high staff turnover, due in part to low compensation; and national policies and regulations limiting the use and hiring of doctors all contribute to human resource constraints in the PEPFAR countries. U.S. field staff in 18 of 28 structured interviews identified shortages of trained host country personnel, including doctors, nurses, and administrators, as a major limitation to U.S. efforts to expand ARV treatment. In addition, three officials working with the Coordinator’s Office identified the human resource shortage as a critical issue that could impede the success of PEPFAR. Further, an assessment of four AIDS treatment sites in Kenya by Family Health International and Management Sciences for Health found that all sites were operating at half the recommended staffing levels. Multilateral and bilateral organizations have also reported on health personnel shortages. 
A joint World Bank–WHO paper stated that in many poor countries, the number of health workers is grossly insufficient for the widespread implementation of a minimum of lifesaving interventions, and a separate WHO paper stated that shortages of human resources are a major constraint to expanding HIV/AIDS treatment and care. For example, the size of the health workforce in Tanzania must triple by 2015 to deliver health care, including HIV/AIDS treatment, to the majority of the population, according to a report funded by the United Kingdom Department for International Development. While accurate data are difficult to obtain, WHO data indicate wide variances in the numbers of doctors and nurses in the 14 countries. Even in Botswana, one of the 14 countries reporting the highest number of doctors per capita, field staff reported a shortage of trained doctors who can provide ARV treatment. The country’s president cited human resource constraints as one of the major challenges to introducing ARV treatment in Botswana. Half of the field staff we interviewed said that in the countries where they work, insufficient numbers of personnel are adequately trained to facilitate expansion of ARV treatment. According to a USAID-funded paper, low-quality nursing and medical training schools inhibit the countries’ ability to produce qualified providers. In addition, an HHS/CDC official in one African country cited lack of public health training as a key challenge to expanding AIDS treatment in that country. A Coordinator’s Office official and UNAIDS officials stated that limited human capacity inhibits the ability of PEPFAR countries to monitor and evaluate ARV treatment, and an advisor to a national AIDS program in another African country stated that staff at the national drug procurement center are not properly trained and that as a result, the center has experienced shortages of health supplies. 
Moreover, donor efforts to improve the skills of health workers through training are not well coordinated, according to USAID and HHS/CDC officials in the field. Lack of coordination results in duplicative training materials or different messages, according to an HHS official and a WHO official, respectively. Further, the World Bank–WHO paper notes that payment of high per diems by donors to ensure attendance at workshops and seminars distracts managers and staff from their work. In addition, the USAID-funded report stated that donors traditionally have focused more on short-term than on longer-term interventions, such as helping to develop and improve medical, nursing, and other technical schools. According to agency field staff and multilateral and other U.S. sources, high turnover of health services personnel is a significant factor contributing to the shortage of health workers in PEPFAR countries, hindering the delivery and expansion of ARV treatment. Seven respondents cited high staff turnover as a challenge, and of these seven, four cited low public sector pay as a factor leading to turnover. Written documents from field staff also stated that low public sector pay contributes to turnover. For example, the USAID-HHS/CDC Fiscal Year 2004 PMTCT Initiative Implementation Plan for Rwanda stated that rapid turnover of personnel, due to noncompetitive public sector salaries, “burnout,” and the loss of trained health-care workers from the public sector, affects the health ministry’s ability to advance programs. Further, the document anticipated that personnel issues will constitute a major challenge to expanding ARV treatment in that country. A USAID-funded study reported that, in some cases, health care providers leave the public sector to earn higher salaries in the private sector or with NGOs. 
Similarly, the President of Botswana said that the country’s national ARV program lost skilled health and other workers to NGOs and development partners, who pay higher salaries than the government. Three U.S. field staff we spoke with emphasized the need for donors to coordinate on common policies regarding salaries for health workers. Likewise, a World Bank expert and a WHO official suggested that donors should develop policies to supplement salaries for public health workers to help alleviate the shortages. Worker emigration and death from AIDS among health workers also contribute to staff shortages. World Bank and WHO reports noted that low pay and poor working conditions contribute to the migration of skilled health workers from resource-poor countries. WHO reported that one-quarter to two-thirds of health care professionals interviewed in some African countries expressed an intention to emigrate to other countries. The report identified lack of training and career opportunities, poor pay and working conditions, and political conflicts and wars as the main factors leading to emigration. In addition, according to a May 2004 WHO report, AIDS deaths have dramatically increased among the health workforce throughout the developing world. Host governments’ national policies and regulations regarding the use and hiring of doctors limit the number of health services personnel available to provide ARV treatment. For example, U.S. government officials in one country said that a policy requiring that only doctors treat AIDS patients represented the greatest obstacle to expanding treatment. Documentation on the national ARV program in that country recommended devolving responsibility to lower-level staff, but mentioned that labor issues could hinder this. In another country, according to a U.S. official, hiring doctors in the public sector can take 6 months to a year. 
Rapid expansion of treatment has been impeded by host government constraints, including, in some countries, limited political commitment to combating HIV/AIDS, poor delineation of roles among government bodies responsible for addressing HIV/AIDS, and slow decision-making processes, according to 19 of the structured interview respondents and written communications to the Coordinator’s Office from the field. Eleven of the 28 respondents cited lack of political commitment to address HIV/AIDS as a major challenge. According to U.S. officials working in one country, despite proclamations at the highest levels that HIV/AIDS constitutes an emergency, it is not treated as such. They noted that they have great difficulty getting a response from the government, which tends to be slow and bureaucratic, and that the health ministry has never been powerful or well funded. Similarly, USAID officials in another country said that although there are strong leaders at the health ministry’s HIV/AIDS and TB division, weak leadership at higher levels in the ministry has made it difficult to advance programs. A joint U.S. government communication, dated September 18, 2003, from a third country stressed the urgent need for high-level political commitment to assure that ministries provide sufficient oversight and staff for effective programming. Conversely, staff in a fourth country stated that political will to address HIV/AIDS has been demonstrated by the central government but not at the local level, where much of the program implementation will occur. A quarter of the respondents (7 of 28) cited institutional constraints, such as poor delineation of roles between government bodies responsible for addressing HIV/AIDS, as an impediment to expanding treatment. For example, a U.S. 
official in one country said that the lack of a clear distinction and definition of roles and responsibilities within the ministry of health and weak management structure constrained his efforts to implement the PMTCT Initiative. A U.S. official in another country reported difficulty working with the host government because several different government entities have responsibility for HIV/AIDS, with no clear reporting hierarchy. Further, HHS/CDC officials in a third country voiced concern about friction between the health minister and the AIDS minister regarding the control of money from the World Bank. The HHS/CDC officials are concerned that the disagreement might result in two separate coordinating mechanisms, causing duplication of efforts. Four respondents from our structured interviews cited host governments’ slow decision-making processes as a key challenge to rapidly expanding ARV treatment. For example, according to a U.S. government official in one country, extensive consultation and discussion delayed programmatic and management decisions, slowing implementation of the PMTCT Initiative. Similarly, HHS/CDC officials in another country said that country’s tradition of consensus-based decision-making requires a great deal of consultation and thus inhibits the country’s ability to quickly address situations such as the AIDS epidemic. According to the officials, this slowness was the major challenge in implementing the PMTCT Initiative in that country. However, the officials also stated that consensus-based decision-making reduces opportunities for corruption, a problem reported by U.S. officials in four countries as a challenge to implementing programs. An HHS/CDC official in a third country reported that decision making is slow because several levels of officials have to approve even routine decisions. 
HHS/CDC and USAID field staff in 16 of 28 structured interviews cited weak infrastructure in host countries as an impediment to implementing and expanding ARV treatment. Specifically, they noted weak systems for gathering information needed to monitor and evaluate programs; inadequate systems for managing the drug supply; poor linkages among HIV/AIDS programs and between these programs and basic health care infrastructure; and insufficient physical infrastructure, including health facilities, roads, and water supply. In 8 of the 28 structured interviews, HHS/CDC and USAID field staff stated that the infrastructure needed for monitoring and evaluating treatment programs is weak. For example, field staff in one country stated that the national AIDS control program’s indicators and data collection methods are not sufficient to identify populations infected with HIV, and staff in a second country said that inadequate feedback to those who administer services or collect data hampers the improvement of programs. Staff from this country also stated that agencies’ differing methods of reporting activities make determining data accuracy difficult. In addition, U.S. agency documents from PEPFAR countries indicated the need for better data collection tools, feedback of analysis and data to district and community facilities, behavior change to increase the value placed on data, and monitoring of the impact of programs as AIDS treatment expands. A joint WHO–World Bank paper also emphasized the need to improve health information systems at the local, national, and international levels. Moreover, half or more of the structured interview respondents indicated that they experienced moderate or greater difficulties in harmonizing data collection methods and reporting requirements with other stakeholders involved in AIDS treatment (see app. VII). According to officials from the U.S. 
government, WHO, and UNAIDS, there is general international consensus on what data should be collected but less consensus regarding how the data should be collected and reported. Eight of 28 interview respondents said that the infrastructure needed to manage and deliver drug supplies in their countries is inadequate, complicating efforts to expand ARV treatment. Respondents in several countries commented on, among other things, the difficulty of maintaining a reliable supply of drugs and basic health commodities; a lack of infrastructure for distributing and storing drugs and other commodities and the absence of a sound commodity management information system; and a protracted ARV shortage that could lead to drug resistance in thousands of affected patients. In one country, fear of being penalized has kept the government’s agency for procuring drugs and related items from sharing information on drug shortages, thereby exacerbating the problem and inhibiting efforts to address it, according to an advisor to the national AIDS program. According to six interview respondents and written communications to the Coordinator’s Office from five countries, poor linkages among programs providing HIV/AIDS services inhibit the expansion of these services. For example, U.S. officials in one country stated that the mechanism for referring patients from sites where they receive counseling and testing to sites where they can receive treatment needs to be improved. In addition, U.S. officials in three other countries stressed the need to link PMTCT and ARV treatment programs to other health services required by patients and their families, such as nutrition and family planning. Poor linkages between donor-supported HIV/AIDS programs and basic health systems may also impair the ability of these systems to continue ARV treatment once donor support is discontinued. 
According to an expert directing two HIV/AIDS projects in four African countries, unless ARV treatment is linked to investments in sustainable health systems, HIV/AIDS programs can draw resources away from, and thus harm, the overall health sector in recipient countries. For example, U.S. officials in one African country stated that PEPFAR activities could decrease the number of staff, quality of facilities, and availability of drugs for basic health services that are not specifically focused on combating HIV/AIDS. According to our interviews and the documents we reviewed, deteriorated physical infrastructure also constitutes a challenge to expanding ARV treatment programs. Many of the hospitals, clinics, and laboratories in the PEPFAR focus countries—some of which have experienced years of civil strife—are ill equipped to handle expansion of ARV treatment. For example, U.S. officials working in one country said that inadequate health care facilities, including lack of laboratories, hamper the monitoring of ARV treatment. According to a U.S. government communication from Ethiopia dated September 18, 2003, facilities must be refurbished and equipment installed, among other needs, to support the implementation of ARV treatment. A November 4, 2003, summary of a joint U.S. agency discussion in Kenya stated that most health facilities targeted for involvement in treatment activities have physical infrastructure needs that should be addressed, including needs for testing and counseling space, electricity, clean water, air conditioning in pharmacy storerooms to maintain drug quality, and improved laboratory space. Further, the USAID-HHS/CDC Fiscal Year 2004 PMTCT Initiative Implementation Plan for Uganda stated that there is inadequate space for program staff and equipment at the ministry of health and for HIV counseling and testing in prenatal clinics. 
Multilateral and nongovernmental organizations have also identified weak health care infrastructure as an impediment to expanding ARV treatment. For example, when WHO ranked the overall health system performance of its 191 member states in 2000, it ranked all 14 of the PEPFAR focus countries in the bottom third. In many of these countries, up to one-half of the population lacks access to basic health care and many health facilities lack basic commodities, such as syringes, as well as laboratories and safe drug storage facilities. In addition, limited infrastructure, including roads, a clean water supply, and electricity, presents barriers to expanding ARV treatment. For example, field staff from one country said that deteriorated roads and other basic physical infrastructure pose a major challenge to delivering clinical and diagnostic services. The Office of the U.S. Global AIDS Coordinator has acknowledged each of the five challenges to expanding ARV treatment programs and has taken certain steps to address them, but some of these challenges require additional effort, longer-term solutions, and the support of others involved in providing ARV treatment. First, the Coordinator’s Office has devised means to improve coordination among U.S. agencies and with host governments and other organizations; however, it is too soon to tell whether they will be effective and the PEPFAR strategy does not state whether the means will be monitored. Second, U.S. agencies are exploring ways to address some U.S. government constraints, but the Coordinator’s Office guidance on ARV procurement leaves key problems unresolved. Third, the Coordinator’s Office proposed short-term assistance to address health worker shortages, including the use of paid workers and volunteers from the United States and other countries, and the PEPFAR strategy proposes several longer-term interventions. However, U.S. 
officials said that using international volunteers for the short-term activities is not cost effective. Fourth, although the Coordinator’s Office has called for stronger commitment by host governments, it has not addressed other, systemic constraints outside its direct authority. Finally, the Coordinator’s Office is taking steps to strengthen systems for monitoring and evaluating progress toward PEPFAR treatment goals and is seeking to involve the private sector in improving the management and supply of drugs. However, some field staff had differing views on implementing a “network model” proposed in the strategy for improving basic health care infrastructure and facilitating treatment referrals. In addition, the Coordinator’s Office has not addressed physical impediments such as lack of space for counseling and testing. The Office of the U.S. Global AIDS Coordinator has acknowledged the importance of coordinating with national governments and other groups and has created mechanisms, such as HIV/AIDS teams led by the ambassador in each country, to enhance U.S. government coordination in the field and with the host government. However, it is too soon to tell whether these mechanisms will resolve the coordination challenges identified by field staff, and the PEPFAR strategy does not state whether the mechanisms will be monitored. Recognizing that providing ARV treatment requires a sustained, collaborative effort from international, national, and local organizations, the PEPFAR strategy outlined an approach to leverage the strengths of each entity while building local capacity. According to the strategy, the Coordinator is expected to maximize U.S. technical assistance, training, and research experience when expanding treatment programs, while working with other stakeholders to leverage strengths and fill program gaps. 
In tandem with the host governments in the 14 PEPFAR focus countries, the Coordinator is also expected to encourage the development of a single in-country structure to facilitate coordination among donors, the host government, NGOs, and other stakeholders. The increased coordination may also facilitate efforts to harmonize proposal, reporting, surveillance, management, and evaluation procedures to ensure that programs are comparable and complementary and to decrease the burden on host organizations and governments. The strategy specifies that the Coordinator’s Office will work with technically expert partners, such as WHO, to determine the best treatment options and ensure that there are sound management strategies in place to support them. Finally, the Coordinator will encourage stakeholders to work through local partners and promote programs that support the countries’ national strategies. In addition, the Coordinator has worked to establish relationships with international counterparts, meeting with the leadership of WHO, UNAIDS, the World Bank, and the Global Fund. The Coordinator, together with the HHS Secretary, also led a delegation of representatives from the administration, the Congress, WHO, UNAIDS, the Global Fund, and numerous private entities and NGOs to meet with leaders and view ARV treatment and other HIV/AIDS-related programs in four African nations in December 2003. To ensure that U.S. efforts in the field are coordinated, and to enhance relationships with the host government, the Coordinator has directed that an HIV/AIDS team, led by the Ambassador, be set up in each country. These teams may also have an official designated by the Ambassador to serve as the day-to-day liaison. The teams are generally composed of representatives of each of the agencies working on HIV/AIDS-related projects in a given country. 
According to the field staff we interviewed, these teams have already been set up in most countries, and some countries had already established HIV/AIDS teams that will now focus on PEPFAR. Also, to improve coordination between headquarters and the field, the Coordinator’s Office sought input from field staff by requesting written documents and by conducting an intensive series of meetings with field staff over a 2-week period in November 2003. However, it is too soon to tell whether these mechanisms will be effective in resolving the coordination challenges field staff identified. The Office of the U.S. Global AIDS Coordinator, together with the agencies implementing PEPFAR, is exploring options for addressing U.S. government constraints involving (1) contracting capacity in the field; (2) differing laws and regulations governing funds appropriated to implementing agencies, in particular, USAID and HHS/CDC, with respect to procurement and foreign taxation of goods purchased with U.S. assistance; and (3) differing agency requirements for auditing non-U.S. grantees. In addition, the Coordinator’s Office has provided guidance to the field on ARV procurement. However, this guidance leaves key issues unresolved. The Coordinator’s Office and PEPFAR agencies are exploring ways to enhance contracting capacity in the field and to address differing laws, regulations, and audit requirements that may affect their joint efforts to expand ARV treatment programs. While no specific options have been proposed to date, the Coordinator’s Office has directed USAID to develop a request for proposals to design and implement a mechanism for procuring, distributing, and managing the supply of drugs and other items. All PEPFAR agencies and possibly other, non-U.S., stakeholders would use this mechanism as well. As a joint mechanism, it may address some of the contracting capacity needs raised by field staff, as well as the differing agency regulations pertaining to procurement. 
Guidelines on procurement released by the Coordinator’s Office on March 24, 2004, note that U.S. agencies involved in PEPFAR have different limitations on their ability to procure goods and services from outside the United States and that the office is reviewing options for addressing this issue. The guidelines state that the office will provide additional guidance in the future, although no specific time frame is given. Regarding foreign taxation of goods bought with U.S. assistance, the PEPFAR strategy states that tariffs and duties on pharmaceuticals are “barriers” that can increase the cost of drugs in developing countries and “work at cross purposes” with initiatives to improve access to medicines. According to officials from the Coordinator’s Office, legal experts from the State Department and other PEPFAR agencies are discussing how to address differing agency appropriations laws regarding this issue. In addition, audit officials from USAID and HHS are discussing how to address differing agency requirements for auditing non-U.S. grantees. The Coordinator’s Office provided guidance to U.S. field staff on ARV procurement, but this guidance did not resolve the following issues regarding the use of PEPFAR funds to purchase these drugs: (1) The policy of the Coordinator’s Office on procuring ARVs may change in the future. (2) The Coordinator’s Office does not define how PEPFAR activities and funding can support host country treatment sites that do use generics. (3) In at least one country, the office’s current ARV procurement policy conflicts with PEPFAR’s stated principle of providing assistance in a manner consistent with host country plans and policies.

Coordinator’s Office Provided Guidance on ARV Procurement

The Coordinator’s Office issued guidance to field staff on ARV procurement over a 5-month period (November 2003–March 2004) in an ad hoc, question-and-answer format in response to inquiries from the field (see table 1). 
This guidance was issued before, during, and after our structured interviews. According to officials from the Coordinator’s Office, they also addressed questions from field staff during 2 weeks of intensive meetings in Washington, D.C., in November 2003 and during visits to the PEPFAR focus countries over the next several months. However, the Coordinator’s Office provided the most detailed guidance more than 2 months after a January 19, 2004, deadline for obligating initial funds and just 1 week before field staff in each country were required to submit their operational plans for fiscal year 2004. As noted previously, the Coordinator’s current policy is to fund only the purchase of drugs that have been approved by entities it defines as stringent regulatory authorities, citing concerns about safety and efficacy. The Coordinator’s Office convened a meeting with international regulators in March 2004 to develop principles for evaluating the safety and efficacy of FDCs. In addition, it has directed HHS/CDC to develop a request for proposals to assure the quality of drugs and other products procured with PEPFAR funds. On May 16, 2004, the HHS Secretary announced an expedited process for reviewing data submitted to the HHS/FDA on the safety, efficacy, and quality of generic and other ARV drugs, including FDCs, intended for use under PEPFAR. Drugs approved under this process can then be purchased with PEPFAR funds provided that international patent agreements and local government policies allow their purchase, according to the Coordinator’s Office, HHS, and USAID.

Guidance from Coordinator’s Office Does Not Resolve All Issues

The ARV procurement guidance provided by the Coordinator’s Office did not resolve all issues regarding the use of PEPFAR funds to purchase these drugs. 
While the guidance clearly stated that no PEPFAR funds could be used to purchase drugs that have not been approved by entities the office defines as stringent regulatory authorities, the PEPFAR strategy leaves open the possibility that funds could in the future be used to procure generic ARV drugs, including FDCs, provided they meet safety and efficacy standards agreed to by the office. Moreover, the strategy endorses the selection of products such as FDCs, which combine several active ingredients. An April 8, 2004, press release from HHS elaborates that combination therapies, including FDCs, are considered by many to be essential to treating diseases like HIV/AIDS as well as to limiting the development of drug resistance. The press release states that, among other advantages, FDCs simplify dosing, which could result in better patient adherence to therapy. In addition, the ARV procurement guidance issued by the Coordinator’s Office does not define how PEPFAR activities and funding can support host country treatment sites that do use generics. The March 24, 2004, guidance acknowledged that many countries’ treatment guidelines include FDCs and other drugs that have not been approved by stringent regulatory authorities. PEPFAR funds therefore cannot be used to purchase these products or build logistical systems that support only these products but can be used to “provide other support” to treatment sites that use them. Further, in at least one country, the office’s current policy, which in effect does not allow the purchase of generics, conflicts with PEPFAR’s stated principle of providing assistance in a manner consistent with host country plans and policies. 
An inquiry from Kenya cited by the Coordinator’s Office in its February 20, 2004, response states that the country’s first-line treatment, at both government and faith-based or private sector facilities, relies on FDCs “for reasons of economics, pill burden, and other factors.” The inquiry urgently requested clarification from the Coordinator’s Office, stating that a decision on whether FDCs and other generics can be purchased will profoundly affect the extent to which the Kenya mission “must develop parallel rather than integrated systems” and the level of resources needed to reach treatment targets under PEPFAR. Other major donors such as the Global Fund—to which the United States is one of the largest contributors and for which the HHS Secretary currently serves as the Chairman of the Board—allow their funds to be used for purchasing generic ARV drugs, including FDCs. The Coordinator’s Office will focus on both short- and long-term interventions to address host country human resource shortages, which it has identified as a critical limitation to implementing its treatment goals. In the short term, the office will focus on rapidly expanding and mobilizing health care personnel through interventions that include the use of paid workers, international volunteers, training, and technical assistance to meet treatment goals under PEPFAR. However, in June 2003, U.S. government officials documented their concerns about the use of international volunteers for some of these activities. The PEPFAR strategy also identified longer-term interventions that should be considered by host governments and other donors, and the Coordinator’s Office is initiating discussions with these groups to explore options for implementing longer-term interventions. 
The Coordinator’s Office will respond to immediate needs to increase manpower through several short-term interventions, including the use of international volunteer health professionals, but field staff expressed concern that this intervention will generate other problems. In addition to using volunteers, U.S. efforts will focus on training existing providers in case management for ARV treatment and providing technical assistance through arrangements that include “twinning”—pairing health facilities in the PEPFAR focus countries with organizations in the United States and other countries—to provide training and technical assistance, according to the PEPFAR strategy. The Coordinator’s Office will also support host country efforts to depend less on the scarce supply of skilled health workers by extending responsibility for patient treatment to nurses, counselors, and health volunteers, as well as exploring options to involve traditional healers, birth attendants, and family members in treatment and care. The Coordinator characterized the human resource shortage as the second most important issue after political leadership in addressing HIV/AIDS. Accordingly, Coordinator’s Office officials stated that all contracts and contract renewals include language on developing local human resource capacity. However, USAID and HHS/CDC field officials informed the Coordinator’s Office of potential problems associated with using international volunteers to address health worker shortages and training. Specifically, the use of such volunteers for short overseas tours creates heavy administrative burdens, may not be sustainable over the long term, and is not cost effective, according to a June 2003 communication summarizing lessons learned from the PMTCT Initiative. The communication recommended that tours be for a minimum of one year. 
In addition, regarding twinning, a USAID official in one country stated that the ministry of health raised concerns over the time involved in training international volunteers and that twinning will not address issues such as attracting and enrolling nurses who will stay in the country, particularly in rural areas. Despite its attention to training and technical assistance, the strategy does not discuss the extent to which the Coordinator’s Office will collaborate with other donors on training to minimize duplicative sessions and workplace disruptions when staff attend training. The PEPFAR strategy outlines longer-term interventions to stem the critical human resource shortage in the 14 countries, emphasizing actions that host governments can take on their own or in discussion with other donors. These include increasing the quality and number of graduates from medical and related professional schools, improving retention of the health sector workforce through salary increases and other incentives, and establishing bilateral and international agreements to resolve salary differentials. The June 2003 communication emphasized the need for guidance on the extent to which U.S. agencies will supplement the salaries of government health-care workers in PEPFAR focus countries in order to retain qualified employees and implement activities under PEPFAR. According to an official in the Coordinator’s Office, the office is developing a policy statement on the use of PEPFAR resources for salaries. This official stated that the Coordinator’s Office plans to work with other donors, including the World Bank, to support long-term interventions such as supplementing salaries and building and strengthening professional schools. The Coordinator’s Office is engaged in frequent meetings with the 3-by-5 team at WHO and has met with officials at the World Bank and UNAIDS to discuss a coordinated approach to human capacity development. 
An interagency group formed under the PMTCT Initiative is also contributing to these efforts. According to an expert at the World Bank, donors should help finance host countries’ efforts to address human resource issues. Because PEPFAR will play a central role in its focus countries, a WHO official stated that other donors will look to the United States to support long-term interventions addressing the issues faced by host country governments. An October 2003 document from U.S. field staff in one African country also raised the importance of U.S. government support for salaries for government workers in the national health system, adding that the national government cannot afford to pay for significant numbers of new staff. The Coordinator’s Office called on U.S. officials, including ambassadors, to advocate for bold leadership to fight HIV/AIDS and identified mechanisms for fostering political commitment and reaching out to all groups involved in combating the disease in recipient countries. The Coordinator’s Office has not begun to work with other stakeholders to address other, more systemic host government constraints that U.S. field staff identified. Recognizing that containment of HIV/AIDS requires bold leadership and political commitment, the PEPFAR strategy calls for high-level officials in Washington and American ambassadors abroad to encourage commitment from heads of state and other government leaders. The strategy emphasizes that American embassy staff must be informed and engaged on the issue of HIV/AIDS in their host countries and asks them to raise the issue in host government forums. On November 26, 2003, the Global AIDS Coordinator sent a communication to embassies in the PEPFAR focus countries that summarized points for building support at the country level. For example, the communication requested that all chiefs of mission brief host government leaders on PEPFAR in order to build their support for the program and establish a process whereby U.S. 
field staff, along with host government officials and other stakeholders, can rapidly begin to design and implement PEPFAR. However, these efforts were hindered by the fast pace of PEPFAR, which, as previously discussed, made it difficult for field staff to consult with host governments. The PEPFAR strategy looks to a broad range of community leaders and private institutions to generate leadership and fight the stigma associated with HIV/AIDS. It calls for using public-private partnerships at local, national, regional, and international levels to strengthen global and in-country responses to HIV/AIDS. For example, the strategy states that the United States will engage community leaders such as mayors, tribal authorities, elders, and traditional healers to promote correct and consistent information about the epidemic and to combat stigma and harmful cultural practices. In addition, it commits to working with faith-based leaders and joint national and international business and labor coalitions to facilitate efforts to improve and expand programs in the workplace and take advantage of marketing, communications, and logistical skills to improve the reach and effectiveness of AIDS programs. The strategy also calls on U.S. officials to advocate for a greater global response through multilateral forums such as UNAIDS, international conferences, and participation in the Global Fund. Neither the PEPFAR strategy nor the Coordinator’s Office addresses other host government constraints raised by our interview respondents, including the poor delineation of roles between government bodies responsible for combating HIV/AIDS and slow decision-making processes, that are outside the Coordinator’s control and will take additional time to resolve. The Coordinator’s Office has taken several steps to improve the infrastructure needed to support expansion of ARV treatment; however, some field staff expressed differing views on implementing a proposed tiered system of health care. 
In response to the PEPFAR strategy’s emphasis on results-driven interventions, the Coordinator’s Office is working to strengthen systems to monitor and evaluate progress toward treatment goals. In addition, the Coordinator’s Office seeks to improve countries’ abilities to manage the drug supply in the short run by, among other things, calling on the private sector to help with distribution. The new procurement mechanism (see p. 34) is also meant to address these issues. Consistent with the U.S. Leadership Act authorizing PEPFAR, the strategy proposes the use of a “network model” of health care facilities to provide a high volume and level of services in central medical centers and more basic services in outlying areas to enhance access to ARV treatment. However, some field staff expressed differing views on this model. Neither the strategy nor the Coordinator’s Office addresses certain physical infrastructure impediments raised in documents submitted to the Coordinator or by our interview respondents. To support the effective gathering and reporting of information to monitor and evaluate progress toward PEPFAR goals, the Coordinator’s Office will support training to improve and expand recipient countries’ surveillance and laboratory capacity. The office will also provide assistance to countries for improved information gathering and reporting to measure progress in reaching program goals, using indicators that measure the numbers of facilities supported, practicing professionals and community workers trained, and clients reached. The Coordinator’s Office worked with officials from HHS, the U.S. Census Bureau, USAID, other U.S. agencies, UNAIDS, WHO, and the Global Fund to assess new data needs and minimize duplicative data collection. 
The Coordinator’s Office developed HIV/AIDS-specific coding categories to gather information for a number of activities, including (1) preventing HIV transmission from mothers to babies, (2) other HIV prevention activities, (3) treatment, (4) care, and (5) assessing laboratory infrastructure needs. For example, to gather information for ARV treatment, the Coordinator’s Office developed a facility checklist to assess delivery of treatment, including eligibility criteria for patients, clinical monitoring and lab tests offered, standard operating procedures and protocols, and record keeping. The Coordinator’s Office is working with the Global Fund and other organizations to synchronize systems for monitoring and evaluating HIV/AIDS programs. According to the office, U.S. officials have met with officials from UNAIDS, the World Bank, the Global Fund, and WHO to discuss developing common indicators and guidelines for paper-based or electronic tracking. To assist U.S. field staff in planning and monitoring treatment programs and in reporting on PEPFAR progress, the office has established the following indicators for monitoring and evaluating ARV treatment:
- the number of facilities, programs, or both, including a separate breakout of the number of faith-based facilities or programs;
- the number of clients served;
- the number of new clients served;
- the number of clients continuously receiving treatment and related services for more than 12 months; and
- the number of people trained.
To measure progress toward the overall PEPFAR goal of providing ARV treatment to 2 million people by the end of 2008, field staff in each of the focus countries will report semiannually to the Coordinator’s Office on the number of people receiving ARV drugs through PEPFAR. According to the Coordinator’s Office, data will be collected and stored in an electronic repository that is expected to be operational in September 2004. Twice a year, U.S. 
field staff will electronically transmit data measuring the progress of PEPFAR activities to the Coordinator’s Office. According to office officials, the office will put the information in a database that field staff and multilateral organizations can access. Because fully equipped laboratories are necessary for monitoring ARV treatment to limit the development of resistant strains of the virus, the Coordinator’s Office will fund assessments of existing laboratory infrastructure and will fund upgrades of laboratories, as needed. In addition, the Coordinator’s Office will support the development, adaptation, and translation of training materials for specimen collection, storage, shipment, testing, and record keeping. The PEPFAR strategy recognizes that the sharp increase in the volume of products to be provided under the program and from other sources such as the Global Fund may challenge existing national supply systems. Accordingly, as noted on p. 34, the Coordinator’s Office is developing a request for proposals to design and implement a joint procurement mechanism to better manage the supply of drugs and other products. The strategy calls for training personnel in health logistics systems and supporting efforts to minimize drug diversion, counterfeiting, and waste. It also states that the United States will collaborate with other donors to minimize distribution gaps. To accomplish its objectives in the short run, the Coordinator’s Office will call on the private sector to perform some logistics functions, such as building up distribution and information management systems and improving storage conditions. For example, PEPFAR agencies will provide technical assistance and fund training to strengthen procurement and distribution systems. By increasing the number of people trained in procurement and distribution, PEPFAR seeks to improve local capacity to negotiate, purchase, manage, and supply goods. 
However, the implementation of this objective may face the same human resource constraints noted previously, due to the limited number of available workers. Consistent with the U.S. Leadership Act authorizing PEPFAR, the PEPFAR strategy proposes a tiered model for providing treatment; however, some field staff expressed differing views on implementing this model. According to the strategy, this “network model” integrates prevention, treatment, and care activities through a layered system of central facilities that support satellite centers and mobile units to reach the most rural areas. It comprises central medical facilities, regional and district-level facilities, and community clinics. A September 18, 2003, communication to the Coordinator from U.S. field staff in Ethiopia stated that the model is appropriate in that country, and that current HHS/CDC and USAID planning for PEPFAR in Ethiopia uses the model. In addition, an October 28, 2003, communication from Mozambique stated that the country has developed an integrated health network with levels of supervision and referral that correspond to the model. However, field staff in Uganda, the country often cited by U.S. government headquarters officials as having a successful model, stated in a written communication to the Coordinator dated October 8, 2003, that the model is not fully operational in Uganda owing to the same host country constraints that many resource-poor countries face. According to the communication, weak or nonexistent infrastructure, limited human and financial resources, and poor training constrain the model at all levels. Although the PEPFAR strategy acknowledges that many of the affected countries lack the health infrastructure needed for effective HIV/AIDS treatment, it does not address certain physical impediments raised by U.S. government field staff, such as inadequate space for HIV counseling and testing in prenatal clinics and other medical facilities. 
While the strategy recognizes that lack of basic amenities such as clean water is a barrier to successful treatment, it does not discuss how to address this issue. In addition, it does not discuss the impact of deteriorating roads, which affect the delivery of drugs and other commodities. Clean water, passable roads, and other basic infrastructure are outside the direct authority of the Coordinator’s Office. The Office of the U.S. Global AIDS Coordinator faces five key challenges as it leads U.S. efforts to significantly expand ARV treatment in the 14 PEPFAR focus countries. Certain key challenges, such as the shortage of trained health workers, limited commitment of some host governments, and weak infrastructure, require long-term solutions and the support of host governments, donors, and other organizations providing ARV treatment. Other challenges are within the control of the U.S. government, and the Coordinator’s Office has begun to (1) take steps to facilitate host government participation in planning PEPFAR activities and (2) explore ways to enhance U.S. contracting capacity in the field and address differing laws, regulations, and requirements applicable to the agencies implementing PEPFAR. In addition, HHS, with the support of the Coordinator’s Office, recently announced an expedited review process for generic and other ARV drugs, including FDCs, which could be procured with PEPFAR funds. However, the Coordinator’s Office has not specified the activities that PEPFAR can fund and support in national treatment programs that use ARV drugs not approved for purchase by the office. Given the importance of these challenges to expanding ARV treatment, it is critical that the Coordinator’s Office ensure that the issues reach full and timely resolution. To improve the U.S. 
Global AIDS Coordinator’s ability to address challenges in expanding AIDS treatment in PEPFAR focus countries, we recommend that the Secretary of State direct the Coordinator to
- monitor implementing agencies’ efforts to coordinate PEPFAR activities with stakeholders involved in ARV treatment, including taking adequate steps to actively solicit the input of host government officials and respond to their input;
- collaborate with the Administrator of USAID and the Secretary of HHS to address contracting capacity constraints in the field and resolve any negative effects resulting from the differing laws governing the funds appropriated to these agencies in the areas of procurement and foreign taxation of U.S. assistance, as well as differing requirements for auditing non-U.S. grantees;
- specify the activities that PEPFAR can fund and support in national treatment programs that use ARV drugs not approved for purchase by the Coordinator’s Office; and
- work with national governments and international partners to address the underlying economic and policy factors creating the crisis in human resources for health care.
The State Department, HHS, and USAID provided combined written comments on a draft of this report (see app. VIII for a reprint of their comments). The agencies concurred with the report’s overall conclusion that while they have addressed a number of key challenges in providing services, other challenges remain for the medium and long term. The agencies did not specifically comment on GAO’s recommendations; however, they noted that program efforts and activities have progressed beyond what the report describes, and work is underway to address the majority of challenges and issues raised. Some of these efforts reflect our recommendations. The agencies also provided technical comments that we have incorporated as appropriate. Our draft report contained the first three recommendations. 
We added the fourth recommendation in light of additional information State, HHS, and USAID provided when they commented on a draft of this report. This information reemphasized the need for these agencies to engage in efforts to address the critical shortage of health workers in recipient countries. We are sending copies of this report to the U.S. Global AIDS Coordinator, the Secretary of HHS, the Administrator of USAID, and interested congressional committees. Copies of this report will also be made available to other interested parties on request. In addition, this report will be made available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149. Other GAO contacts and staff acknowledgments are listed in appendix IX. The Chairman of the Subcommittee on Foreign Operations, Export Financing, and Related Programs of the House Committee on Appropriations asked us to (1) identify major challenges to U.S. efforts to expand antiretroviral (ARV) treatment in resource-poor settings and (2) assess the U.S. Global AIDS Coordinator’s response to these challenges. Our work focused on the 14 countries targeted under the President’s Emergency Plan for AIDS Relief (PEPFAR): Botswana, Côte d’Ivoire, Ethiopia, Guyana, Haiti, Kenya, Mozambique, Namibia, Nigeria, Rwanda, South Africa, Tanzania, Uganda, and Zambia. To identify challenges to U.S. efforts to expand ARV treatment, we conducted 28 structured telephone interviews in December 2003 and January 2004 with key staff from the U.S. Agency for International Development (USAID) and the Department of Health and Human Services’ Centers for Disease Control and Prevention (HHS/CDC) responsible for implementing HIV/AIDS programs in the 14 targeted countries. To ensure balance, we conducted one USAID and one HHS/CDC interview in each country. 
We coded the responses to our open-ended interview questions using a set of internally developed analytical categories. Our structured interview document contained 16 questions on the implementation and expansion of HIV/AIDS treatment programs, including program activities and coordination and management challenges (see app. II). To develop the questions and further assess challenges, we reviewed numerous documents analyzing treatment programs from U.S. government agencies, U.N. organizations, and nongovernmental organizations (NGO), including reports by medical experts and practitioners. We also interviewed U.S.-based officials from USAID and HHS; representatives from multilateral organizations, including the World Health Organization (WHO), the United Nations Joint Program on HIV/AIDS (UNAIDS), the World Bank, and the Global Fund to Fight AIDS, TB, and Malaria (Global Fund); and medical experts experienced in treating people with HIV/AIDS in resource-poor settings. We traveled to Geneva, Switzerland, to meet with WHO, Global Fund, and UNAIDS representatives, and to Paris, France, to meet with program experts from Médecins sans Frontières (Doctors Without Borders), an NGO providing ARV and other AIDS treatment in resource-poor countries. Most of the structured interview questions were open-ended; two were closed-ended (see app. II for a list of the questions). Experts reviewed initial versions of our open- and closed-ended questions, and four of our initial respondents pretested the questions. We refined our questions based on their input. To summarize the open-ended responses, we systematically coded a set of key questions on challenges to coordination and program expansion from our structured interviews. We grouped the responses into five major challenge categories. As in any exercise of this type, the categories developed can vary when produced by different analysts. 
To address this, two GAO analysts reviewed the responses to the key questions from five interviews and independently proposed categories, separately identifying major challenges and then agreeing on a common set of challenges. They independently analyzed and differentiated responses into subcategories within each major challenge area and then agreed on a common set of subcategories. We refined these subcategories during the coding exercise that followed. Interview responses falling into a specific subcategory often derived from a variety of questions in our analysis; there was not a one-to-one correspondence between questions and categories. We then analyzed applicable statements from each of the 28 interviews and placed them into one or more of the resulting subcategories. Four GAO analysts each examined 7 of the 28 interviews. One analyst made some adjustments in placements to ensure consistency in coding and then compiled the resulting placements into a single master document. The analyst then summarized and tallied the number of respondents providing information in each subcategory. Two GAO analysts then independently reviewed the interview analysis document. All disagreements regarding the placement of responses into subcategories were discussed and reconciled. Figure 4 presents the numbers of respondents citing challenges in each of the five major categories, and figures 8 through 12 present the breakout of each major challenge into subcategories. These figures show subcategories containing information from 3 or more respondents; we also cite in footnotes other information provided by only 1 or 2 respondents. We explicitly prompted respondents with questions on coordination issues. We identified the other four major challenges during our analysis of the responses to the coded questions. As a result, the number of respondents providing information on coordination challenges is higher than the number providing information on the other four challenges. 
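The tallying step described above can be sketched in a few lines of Python. The respondent IDs, subcategory labels, and coded statements below are hypothetical, and the script illustrates only the counting rule (a respondent counts once per subcategory, no matter how many of that respondent's statements were coded there); it is not the tooling GAO actually used.

```python
from collections import defaultdict

# Hypothetical coded statements: (respondent_id, subcategory) pairs.
# A respondent may contribute several statements to one subcategory
# but is tallied only once per subcategory.
coded_statements = [
    ("R01", "host-government: slow decision-making"),
    ("R01", "host-government: slow decision-making"),  # duplicate statement
    ("R01", "infrastructure: drug supply"),
    ("R02", "host-government: slow decision-making"),
    ("R03", "infrastructure: drug supply"),
]

# Collect the distinct respondents cited under each subcategory.
respondents_by_subcategory = defaultdict(set)
for respondent, subcategory in coded_statements:
    respondents_by_subcategory[subcategory].add(respondent)

# Tally the number of respondents (not statements) per subcategory.
tallies = {sub: len(resps) for sub, resps in respondents_by_subcategory.items()}
print(tallies)
# {'host-government: slow decision-making': 2, 'infrastructure: drug supply': 2}
```

Deduplicating with a set is what makes the counts per-respondent rather than per-statement, which matches how the figures report "numbers of respondents" in each subcategory.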
We conducted a separate analysis of the two closed-ended questions, which asked respondents to rank the degree of difficulty coordinating with various groups (question 12.b), and coordinating with all parties on specific activities (question 13.b). (See app. VII.) Finally, to expand on the structured interviews, we reviewed relevant U.S. laws, regulations, and policies governing procurement, contracting, taxation, and auditing; documents that field representatives prepared for the Coordinator’s Office; and documents from multilateral organizations and NGOs. We also interviewed U.S.-based officials from the Coordinator’s Office, USAID, and HHS. To assess the Global AIDS Coordinator’s response to these challenges, we reviewed The President's Emergency Plan for AIDS Relief: U.S. Five Year Global HIV/AIDS Strategy (February 2004); administration guidance, including several communications to the field on ARV procurement; and information on the emerging structure and initial activities of the Coordinator’s Office. We also interviewed officials from the Coordinator’s Office, USAID, and HHS. We conducted our work from July 2003 through May 2004, in accordance with generally accepted government auditing standards. The following questions are to assist the U.S. General Accounting Office in gathering information on how USAID missions and HHS/CDC field offices coordinate the implementation and scale-up of ARV treatment programs in the field. Specifically, we are looking to understand how your agency coordinates with other U.S. government agencies and other key stakeholders (multilateral, other bilateral, host government, nongovernmental), to identify the challenges to these coordination efforts, and to obtain lessons learned that can inform the President’s Emergency Plan for AIDS Relief. For questions 2-5, please refer to appropriate documents. Where asked, please indicate the name of the document(s) you used to answer these questions. 
[Response table columns: last 12 months / to date]
We are interested in the PMTCT, PMTCT Plus, and other ARV programs. Which of these programs does your mission/field office support? Approximately how many people are currently receiving these services in your country? Please indicate whether the numbers in the PMTCT Plus column are included in the ARV treatment column. Please provide the name of the document(s) you used to obtain the data for each of these services. Please indicate if the available data are inadequate to answer the question for any of these services. Of the number in 2.a., how many are being supported by U.S. government programs? Please provide the name of the document(s) you used to obtain the data for each of these services. Please indicate if the available data are inadequate to answer the question for any of these services. Over the next 6-12 months, how many people in your country do you realistically expect to start treatment? Please provide the name of the document(s) you used to obtain the data for each of these services. Please indicate if the available data are inadequate to answer the question for any of these services. Of the number in 4.a., how many will be supported by U.S. government programs? Please provide the name of the document(s) you used to obtain the data for each of these services. Please indicate if the available data are inadequate to answer the question for any of these services. 6.a. Please look at the list of program activities related to PMTCT, PMTCT Plus, and ARV treatment that we sent to you. In which of these program activities is your mission/field office involved? Indicate which of these activities are directly funded by your mission/field office. Training (of doctors, nurses, healthcare workers and administrators) 6.b. I’m going to read out a list of items and services related to ARV treatment. Does your mission/field office procure any of them? 
diagnostics (e.g., test kits, including rapid test kits) lab equipment and commodities (e.g., reagents) 6.c. What types of program activities (listed in 6.a.) and procurement activities (just discussed) is your mission/field office best suited to perform? 6.d. With which of these activities do you face the greatest challenges to implementation? 6.e. What do you see as a feasible solution to these challenges? 7. How do you program resources according to congressional earmarks? Given the earmarks in the authorizing legislation for the President’s Emergency Plan for AIDS Relief (55% for treatment, of which 75% is to be spent on ARV drugs), do you have to make major changes in your programs to accommodate these earmarks? 8.a. Has a point of contact for the President’s Emergency Plan for AIDS Relief been designated in your country? If so, is this contact at the U.S. Embassy? If not, at which agency? 8.b. What other U.S. government agencies does your mission/field office work or coordinate with on VCT, PMTCT, PMTCT Plus, and/or other ARV treatment programs? Please identify the program activities that these agencies perform. 8.c. How does your mission/field office currently coordinate with these agencies? (Please tell us about all formal and informal coordination mechanisms, such as regular meetings, procedures for information sharing, MOUs, TORs, informal contacts, etc.) 8.d. Are there any plans to change the method of coordination? 9. Please describe the key challenges your mission/field office has faced coordinating with other U.S. agencies on VCT, PMTCT, PMTCT Plus, and/or other ARV treatment. Please provide examples of the consequences of these challenges. Coordinating with multilateral organizations (World Bank, Global Fund, UN organizations) 12.c. If you have not already addressed this issue in question 12.a., with which type of partner do you experience the most coordination challenges? Please explain. 13.a. 
Based on our research to date, we have identified certain function-related coordination challenges that may arise among stakeholders in a given country:
- harmonization of monitoring and evaluation indicators
- harmonization of data collection methods
- harmonization of data reporting requirements
- harmonization of feedback to those who administer services and/or collect data
Are there any other functional areas that you think raise or may raise significant coordination challenges? 13.b. Based on your experience at your current post, please rate the extent to which you experience difficulties coordinating with other partners in the following areas: 13.c. If you have not already addressed this issue in question 12.a. or 13.a., with which area do you experience the most coordination challenges? Please explain. 14.a. What activities did your mission/field office initiate with funding from the PMTCT Initiative? 14.b. What were the key challenges you faced on the PMTCT Initiative and what were the lessons learned that can inform the implementation of PEPFAR? 15. Could you please tell us about a successful ARV treatment program in the country where you serve? What factors contribute to its success? Could you please provide contacts (phone, email address) with whom we can follow up, if necessary? 16. What changes—if any—would you suggest be made to facilitate interagency and international coordination in scaling up ARV treatment? With the advent of PEPFAR, U.S. proposed funding for HIV/AIDS-related activities in the 14 focus countries increased substantially, as shown in figure 5. The Office of the U.S. Global AIDS Coordinator was organized to manage U.S. policies and programs to combat the global AIDS epidemic and to support administrative, communications, and diplomatic efforts. To accomplish this mission, the office has eight specialized units (see fig. 7). 
Management Services—provides administrative support to the office, including human resources, information management, and operational budget.
Communications—plans and implements all communications support for PEPFAR activities while promoting the involvement of public and private organizations.
Diplomatic Liaison—prepares strategic plans, conducts activities to promote international involvement, and coordinates international response on HIV/AIDS by working with non-U.S. stakeholders.
Training and Human Resources—oversees human capacity and development activities and develops, implements, and monitors training programs.
Program Services—develops and monitors the 14 countries’ PEPFAR implementation plans and provides technical and clinical support to the focus countries and for all other activities conducted by the Global AIDS Coordinator.
Monitoring, Evaluation, and Strategic Information—evaluates progress toward PEPFAR goals and the impact of PEPFAR activities; works with the international community to harmonize information collection and serves as the liaison to both the research community and the research and information divisions of implementing agencies.
Government Relations—responds to congressional requests for information, communicates policy to the Congress, and prepares congressional reports and compliance documents.
Budget and Appropriations—develops the annual program budget for the Coordinator’s Office and serves as the liaison to the White House, administrative departments and agencies, and the field on program budget issues, including disbursement, tracking, and reporting.
As of June 25, 2004, 69 percent of the positions shown in figure 7 were staffed. Positions within the Coordinator’s Office are filled with a combination of permanent hires and individuals on reimbursable and nonreimbursable detail from other sections of the State Department or other agencies. The Office of the U.S. 
Global AIDS Coordinator reported that, together with USAID and HHS, it had obligated a total of $346.9 million in PEPFAR funds as of March 31, 2004. These funds were obligated by means of tracks 1 and 1.5 through many awards to implementing entities in the 14 focus countries for activities related to HIV/AIDS treatment, prevention, and care, as follows. Track 1 provided rapid funding to organizations such as U.S.-based NGOs that can respond quickly in more than one country. As of March 31, 2004, the Coordinator’s Office had awarded a total of $114.7 million in four areas: (1) modifying behavior by encouraging abstinence and faithfulness ($4.9 million obligated by USAID); (2) providing care for AIDS orphans and vulnerable children ($4.7 million obligated by USAID); (3) providing ARV therapy for those infected with HIV ($92 million obligated by HHS); and (4) preventing HIV transmission through safe medical injection ($13.1 million obligated by USAID and HHS). Track 1.5 provided rapid funding to programs run by organizations in individual countries. USAID and HHS obligated a total of $232 million under track 1.5 for all 14 countries combined as of March 31, 2004. Like track 1 funding, this funding was to continue and expand ongoing activities. When allocating funding under track 1.5, U.S. missions were encouraged to consider programs that build on the PMTCT Initiative, in particular those that expand treatment to cover mothers and their partners. Track 2 provides funding for each country’s first annual operational plan. The Coordinator will assess annual funding levels in consultation with the U.S. agencies and Chiefs of Mission in each country and release funds after approving each country’s plan. According to guidance provided by the Coordinator’s Office, these assessments are meant to ensure that U.S. agencies in each country are leveraging their strengths and coordinating their efforts. 
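As a quick arithmetic check, the track 1 award amounts listed above do sum to the $114.7 million total reported. The figures (in millions of dollars) are taken from the text; the area labels are abbreviated for illustration.

```python
# Track 1 obligations by area, in millions of dollars (figures from the text).
track1_awards = {
    "abstinence and faithfulness (USAID)": 4.9,
    "AIDS orphans and vulnerable children (USAID)": 4.7,
    "ARV therapy (HHS)": 92.0,
    "safe medical injection (USAID and HHS)": 13.1,
}

# Round to one decimal place to avoid floating-point noise in the sum.
total = round(sum(track1_awards.values()), 1)
print(total)  # 114.7, matching the reported track 1 total
```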
As of May 31, 2004, the Coordinator’s Office had approved 14 countries’ operational plans totaling $589,401,340. Figures 8 through 12 provide more information on the challenges that 28 respondents in the field identified during the structured interviews. To generate these figures, we separately analyzed responses in each of the five main challenge categories and placed them in specific subcategories within each challenge category. We then tallied the number of respondents in each of the subcategories to generate figures 8 through 12. Many respondents reported challenges in more than one category or subcategory. Our structured interview analysis contained two closed-ended questions that asked respondents to rank the difficulty of (1) coordinating with various groups and (2) coordinating with all parties on specific activities (see questions 12.b and 13.b in app. II). When asked to rank the difficulty of coordinating with various groups, 15 respondents indicated that they experienced at least moderate difficulty coordinating with the host government in the country where they serve, and 13 reported the same level of difficulty coordinating with multilateral entities, such as the World Bank and U.N. organizations (see table 2). By comparison, only 2 respondents stated they had at least moderate difficulty coordinating with other U.S. government entities. The majority of respondents reported only a minimal degree of difficulty (“some or little extent” or “no extent”) coordinating with other bilateral donors, NGOs, and the private sector. Respondents attributed the difficulty of coordinating with nongovernmental and private organizations to the fact that these groups are numerous and not all of them are known. 
Question 12.b: Based on your experience at your current post, please rate the extent to which you experience difficulties coordinating with the following partners:
Regarding coordination on specific activities, 16 respondents reported moderate or greater difficulty coordinating provision of feedback to those who administer services or collect data, and 15 reported a similar degree of difficulty in coordinating procurement policies and data reporting requirements (see table 3). Half of the 26 respondents who answered this question reported moderate or greater difficulty coordinating data collection methods. The majority reported little or no difficulty coordinating treatment protocols or data to be collected. In addition to the person named above, Kate Blumenreich, Martin de Alteriis, David Dornisch, Kay Halpern, Reid Lowe, Rebecca L. Medina, Mary Moutsos, and Tom Zingale made key contributions to this report.
The President's Emergency Plan for AIDS Relief (PEPFAR), announced in January 2003, aims to provide 2 million people with antiretroviral (ARV) treatment in 14 of the world's most severely affected countries. In May 2003, legislation established the position of the U.S. Global AIDS Coordinator in the State Department. GAO was asked to (1) identify major challenges to U.S. efforts to expand ARV treatment in resource-poor settings and (2) assess the Global AIDS Coordinator's response to these challenges. GAO interviewed 28 field staff from the U.S. Agency for International Development (USAID) and the Department of Health and Human Services (HHS), who most frequently cited the following five challenges to implementing and expanding ARV treatment in resource-poor settings: (1) coordination difficulties among both U.S. and non-U.S. entities; (2) U.S. government policy constraints; (3) shortages of qualified host country health workers; (4) host government constraints; and (5) weak infrastructure, including data collection and reporting systems and drug supply systems. These challenges were also highlighted by numerous experts GAO interviewed and in documents GAO reviewed. Although the Global AIDS Coordinator's Office has begun to address these challenges, resolving some challenges requires additional effort, longer-term solutions, and the support of others involved in providing ARV treatment. First, the Office has taken steps to improve U.S. coordination and acknowledged the need to collaborate with others, but it is too soon to tell whether these efforts will be effective. Second, to address policy constraints, U.S. agencies are working to enhance contracting capacity in the field and resolve differences on procurement, foreign taxation of U.S. assistance, and auditing of non-U.S. grantees. However, the Office's guidance did not address key issues related to the use of PEPFAR funds to buy certain ARV drugs. 
Third, the Office has proposed short-term solutions to the health worker shortage, such as using U.S. and other international volunteers for training and technical assistance; however, agency field officials said that using such volunteers is not cost effective. The Office is discussing with other donors certain longer-term interventions. Fourth, the Office has taken steps to encourage host countries' commitment to fight HIV/AIDS, but it is not addressing systemic challenges outside its authority, such as poor delineation of roles among government bodies. Finally, the Office is taking steps to improve data collection and reporting and better manage drug supplies.
The Postal Reorganization Act provided USPS with authority to offer nonpostal services. USPS used this authority to introduce a wide array of products, such as prepaid phone cards; an electronic funds transfer service between the United States and Mexico (Dinero Seguro); and retail merchandise, such as T-shirts, mugs, and neckties. In 2006, PAEA eliminated USPS’s authority to offer nonpostal services unless they were offered as of January 1, 2006, and expressly grandfathered by the PRC. PAEA directed the PRC to review USPS’s ongoing nonpostal services within 2 years of enactment to determine whether they should continue (i.e., be grandfathered), taking into account the public’s need for the service and the private sector’s ability to meet that need. The PRC defined, for the purposes of its review, a service to be an ongoing commercial activity that USPS offered to the public for financial gain. Using this definition, all nonpostal services that were offered as of January 1, 2006, were ultimately grandfathered by the PRC. With PRC approval, USPS can introduce new, related nonpostal products if they fall under the umbrella of one of its grandfathered nonpostal services. For example, because the PRC grandfathered Officially Licensed Retail Products as a nonpostal service, USPS can offer a variety of related products, including stamp dispensers and framed postal artwork, as long as they are consistent with the underlying grandfathered nonpostal service. While USPS offers a number of nonpostal products and services, the revenue they generate is relatively small. Nonpostal revenue in fiscal year 2011 was $173 million, accounting for a small fraction of a percent of USPS’s total revenue of $65.7 billion. 
PAEA required that grandfathered nonpostal services be regulated as market dominant, competitive, or experimental products. Market dominant services include those products where USPS exercises sufficient market power that it can, among other things, effectively set the price of such product substantially above cost without risk of losing a significant level of business to other firms offering similar products. For example, First-Class Mail letters and sealed parcels are market dominant products. Competitive services consist of all other products and are those in which USPS competes with the private sector; USPS must cover its costs for these services. PAEA also authorized USPS to conduct market tests of experimental postal products if the product being tested (1) is significantly different from all products offered by USPS within the prior 2-year period; (2) will not result in an undue market disruption, especially for small business concerns; and (3) is correctly classified as either a market dominant or competitive product. The PRC, as a matter of practice, considers whether these conditions have been met before USPS can begin a market test. PAEA authorized market tests for products that are not anticipated to exceed $10 million in annual revenue and authorized the PRC to exempt products from this threshold if the annual revenue will not (or is not expected to) exceed $50 million. Generally, a market test may not exceed 2 years; this period allows USPS to determine, among other things, if there is a market for the product and, if so, to test various pricing scenarios. The time frame allows USPS to demonstrate a “nexus” to the mail and, consequently, that the product is a postal product. If USPS believes a market test demonstrates that a product is suitable, it may seek approval from the PRC to permanently offer it. USPS also may terminate a market test if, for example, the test did not meet USPS’s objectives. If requested by USPS, the PRC may extend the testing period for an additional year. 39 U.S.C. § 3641(d). 
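The revenue caps and duration rules above lend themselves to a compact restatement as simple checks. The following is purely an illustrative sketch; the constant and function names are our own assumptions and do not correspond to any actual USPS or PRC system:

```python
# Illustrative sketch of the PAEA market-test limits described above.
# Names are hypothetical, not any real USPS/PRC software.

STANDARD_REVENUE_CAP = 10_000_000  # tests generally limited to $10M annual revenue
EXEMPT_REVENUE_CAP = 50_000_000    # ceiling available if the PRC grants an exemption


def within_revenue_limit(expected_annual_revenue: float,
                         prc_exemption: bool = False) -> bool:
    """Return True if a proposed test's expected annual revenue falls
    under the applicable cap ($10M, or $50M with a PRC exemption)."""
    cap = EXEMPT_REVENUE_CAP if prc_exemption else STANDARD_REVENUE_CAP
    return expected_annual_revenue <= cap


def max_test_duration_years(prc_extension: bool = False) -> int:
    """A market test generally may not exceed 2 years; on USPS's request
    the PRC may extend it by one additional year (39 U.S.C. § 3641(d))."""
    return 3 if prc_extension else 2
```

For example, a product expected to earn $20 million a year would fail the standard check but pass if the PRC granted an exemption.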
including the activities to be performed by USPS and the terms of reimbursement, if applicable. One such agreement allows USPS to accept passport applications on behalf of the U.S. Department of State (State Department). Given declining mail volumes, customers’ changing needs and their use of the mail, and a desire to address these issues, USPS began engaging with internal and external stakeholders within the postal community to generate ideas for innovative products and services that could generate additional revenue. In the summer and fall of 2010, USPS held a series of discussions, including an innovation symposium with stakeholders including business leaders, partners, customers, suppliers, and academics, to gather insights on potential innovations and revenue opportunities. According to USPS, the outreach generated over 1,500 potential initiatives, which were evaluated and winnowed down based on a variety of factors, such as time to implement, return on investment, and strategic fit. According to USPS, its decisions to pursue certain initiatives have been guided by several recurring themes, including making it easier and more convenient for businesses and consumers; promoting mailing best practices, such as those that incorporate barcode technology into advertising campaigns; and innovating and leading in e-commerce delivery to meet the needs of the mobile generation. In addition to these recurring themes, each initiative is aligned to support one of USPS’s three primary marketing strategies. These strategies are: (1) First-Class Mail—slow the diversion of First-Class Mail by differentiating hard copy from digital mail and embracing new opportunities from digital and social media; (2) Marketing Mail—simplify the use of marketing mail for businesses and promote emerging technologies for marketing mail; and (3) Shipping Services—develop solutions to grow shipping services in the e-commerce market. 
USPS’s overall strategic direction is described annually in its Comprehensive Statement of Postal Operations and the impact of these strategies is described in its annual Integrated Financial Plan—both of which are publicly available. While the 2013 Integrated Financial Plan indicates that USPS expects an operating loss of $2 billion in 2013, it projects that USPS’s revenue-generating initiatives will help USPS increase its shipping revenue by 10 percent and reduce the decline in its mailing revenue to 3 percent in fiscal year 2013. USPS plans to complete a multiyear revenue plan in the spring of 2013 that will include details on its competitive strategies. We have previously reported that making progress toward financial viability would require USPS to take steps to cut costs and increase revenue within its current legal authority. Provisions in postal reform legislation passed in the Senate would have provided USPS with the authority to undertake new revenue-generating initiatives such as offering new nonpostal services and exploring opportunities to offer services on behalf of state and local governments. We also previously identified questions we believe are important to consider before USPS is granted such additional authority, including the following: How would USPS finance its nonpostal activities, considering its difficult financial condition? Should USPS be allowed to compete in areas where there are already private-sector providers, and if so, on what terms? GAO, U.S. Postal Service: Strategies and Options to Facilitate Progress toward Financial Viability, GAO-10-455 (Washington, D.C.: Apr. 12, 2010). At the time PAEA was enacted, USPS was offering 12 nonpostal services that it has continued to offer as a result of these services being grandfathered by the PRC. 
Two of these are market dominant services, including MoverSource (a service whereby the cost of changing a customer’s mailing address is defrayed through the sale of advertising) and Philatelic Sales (sales related to the hobby of stamp collecting). The remaining 10 are competitive services, including USPS’s Passport Photo Service. In fiscal year 2011, USPS reported to the PRC that it generated net income totaling $141 million from its 12 nonpostal services and related products. Appendix I provides additional information about each of these grandfathered nonpostal services. Since the enactment of PAEA, USPS has received approval from the PRC to market test a total of eight experimental postal products. Four of the eight approved market tests are for market dominant products, such as First Class Tracer, which enables the tracking of First-Class letters through the mail stream. The remaining four experimental products are competitive products, including USPS’s Gift Cards—an experimental product to test the feasibility of selling gift cards loaded with a specified sum of money to consumers who could choose to mail the cards. Four of these experimental products are an outgrowth of USPS’s stakeholder discussions on new revenue-generating initiatives and are discussed further below. USPS terminated one of the eight approved market tests—Collaborative Logistics—a test to determine whether there was a market for renting excess space on USPS’s long-haul delivery trucks. While USPS considered the market test successful, it did not seek to make Collaborative Logistics a permanent product. Instead, USPS terminated the test in September 2011, indicating that its ongoing facility consolidations had resulted in significant opportunities to reduce overcapacity and costs within its transportation network. Depending on its future financial and operational conditions, USPS said it may seek to introduce this product on a permanent basis in the future. 
Appendix II provides more information on each of the experimental postal products for which USPS has received PRC approval for market testing since the enactment of PAEA. USPS has the authority to perform services for other federal agencies on a discretionary basis by entering into written agreements with these agencies. Because any postal officer may enter into such agreements, and USPS does not have a centralized office responsible for executing or otherwise tracking these agreements, USPS was unable to provide comprehensive information on the agreements it currently has. However, based on our discussions with USPS officials and our review of related reports, we identified at least four services that USPS performs on a discretionary basis for federal agencies. For example, USPS collects data on vacant addresses for the Department of Housing and Urban Development to use in forecasting neighborhood changes, assessing neighborhood needs, and measuring the performance of its housing-related programs. USPS also accepts passport applications at over 6,300 post offices nationwide for the State Department. By using USPS’s extensive network of post offices, the agencies’ agreement is intended to provide passport applicants with more convenient access to passport acceptance services than the State Department could provide alone, particularly in remote U.S. areas. In addition, USPS leases its excess space, including parking, office space, and roof areas to federal agencies, including the General Services Administration. Finally, USPS processes investigations related to equal employment opportunity complaints for numerous federal agencies; USPS receives reimbursement for these services. As of November 2012, USPS was pursuing 55 stakeholder-identified initiatives that it believes respond to the changing needs of consumers and businesses and strengthen the relevance of USPS and the mail. 
Forty-eight initiatives are extensions of USPS’s existing lines of postal products and services; three are extensions of grandfathered nonpostal services; and four are experimental postal products. Forty-five of the initiatives are ongoing, and are discussed below and in more detail in appendix IV. Ten are under development and are not discussed. Thirty-nine of the 48 initiatives that are extensions of USPS’s existing lines of postal products and services are ongoing. Examples include: The reclassification of 6,000 of its post office boxes from market dominant to competitive in markets with other mail services in fiscal year 2011. This change allowed USPS to offer post office box customers additional services such as expanded lobby hours, earlier pick-up times, and the use of post office locations for street-styled addressing (e.g., 131 South Center St. #3094, instead of P.O. Box 3094) for an increased fee. The launch in fiscal year 2012 of the Partner Campaigns—an initiative in which USPS works with retail partners to increase the public’s awareness that retail stores also offer postal services. According to USPS, the Partner Campaigns are expected to increase customer access to postal products and services. The launch of 2nd Ounce Free in fiscal year 2012—an initiative that enables large mailers, such as banks, to send twice as much mail for the 1-ounce price. According to USPS, this product allows mailers to use the second ounce for marketing purposes which adds value to their mailings. For example, a mailer may choose to combine advertising with its bills and statements. If that advertising leads to the consumer using return mail, it could lead to greater sales for USPS and the mailer and slow the decline in the use of First-Class Mail. USPS is pursuing three stakeholder-identified initiatives that are extensions of its existing nonpostal services. 
Two of these initiatives are ongoing, including an effort to allow customers to change their addresses using mobile devices, which expands the ways a customer can request address changes through MoverSource. USPS is also pursuing four stakeholder-identified initiatives involving experimental postal products. Two that have completed market testing and are ongoing as permanent postal products are Every Door Direct Mail™, which is designed to make it easier for small and medium-sized businesses to advertise using the mail, and Reinvigorate Samples, which is intended to encourage increased mailings of product samples. Market tests for the two other experimental postal products—prepaid postage on the sale of greeting cards and a campaign to involve selected companies in direct mail advertising—are under way. USPS pursued these 55 initiatives for reasons including revenue generation and adding value to the mail, among others, as shown in figure 1. Twenty-four initiatives (44 percent) were pursued solely or primarily to grow revenue, including a First-Class Package Service for packages weighing less than 1 pound—the only per-ounce pricing in the marketplace, according to USPS officials. To grow revenue, USPS also introduced several options for returning unwanted merchandise more easily and conveniently. USPS officials could not provide estimates of the expected net income for all of the 55 initiatives; however, the officials estimated that 9 of them will collectively generate a net contribution of about $240 million in fiscal year 2012. USPS decided not to pursue 25 additional stakeholder-identified initiatives because of financial and other reasons. Specifically, 12 of 25 initiatives, or 48 percent, were abandoned because of financial reasons. For example, USPS decided not to offer online Identity Management Services to educate customers about how to protect themselves from identity theft when shopping online. 
USPS anticipated that the service also would provide convenient access for purchasing from online companies pre-qualified by USPS. According to USPS, it abandoned this initiative because it determined that the initiative would not result in a return on USPS’s investment. In other cases, USPS determined that initiatives required too high an initial investment or posed other financial risks and uncertainties. For example, USPS decided not to pursue a domestic money-transfer service and a retail bill-payment initiative because the costs would render these financial services unprofitable at this time. USPS decided not to pursue the remaining 13 stakeholder-identified initiatives for other reasons. USPS eliminated four initiatives that could be viewed as nonpostal services because it lacks statutory authority to perform new services in this area, as discussed further below. USPS eliminated four initiatives because stakeholders were not interested in participating in the efforts. For example, USPS wanted to expand and digitalize its passport services at self-service kiosks in post offices as a potential means to reduce error and fraud. However, according to a recent USPS Office of Inspector General report, the State Department was not interested in participating because, in its view, the kiosks would not be cost effective. USPS also tried to partner with the Internal Revenue Service to use personnel at post office locations to verify the identities of individuals claiming eligibility for the Earned Income Tax Credit (a benefit for certain individuals who work and have lower wages). According to USPS officials, there was limited interest on the part of the Internal Revenue Service because, as reported by the Office of Inspector General, the Internal Revenue Service decided to implement an alternative solution. 
USPS eliminated two initiatives because it determined that they required a greater initial commitment of resources than other initiatives and were therefore of lower priority. For example, USPS decided not to provide eligible companies with volume-based discount pricing to increase their use of First-Class Mail because companies may be unwilling or unable to make the needed investments for the initiative to work. Instead, USPS officials said they are focusing on other incentives with lower investment thresholds. USPS eliminated two initiatives because it determined that they could potentially damage the trust that customers have in the USPS brand. One initiative involved opening and scanning a customer’s mail—with the customer’s permission—to send the mail to them in digital form. The other initiative envisioned providing companies with tools to enable them to better target their mailings to consumers. USPS eliminated one initiative that conflicted with ongoing efforts to reduce the postal workforce. Specifically, USPS decided not to use its Human Resource Shared Services Center to offer retirement management services to other federal agencies, in part, because it does not currently have sufficient capacity to expand these services. Figure 2 summarizes the primary reasons cited by USPS for not pursuing the 25 initiatives. According to USPS officials, they would like to pursue additional revenue-generating opportunities in three areas—nonpostal services; shipments of beer, wine, and spirits; and cooperation with state and local governments—if provided with additional statutory authority. The officials stated that USPS staff have briefed congressional members and staff about these areas. In addition, they noted that USPS has outlined its interests in publicly available reports, including its March 2010 Action Plan, which lays out the agency’s position on its need for increased flexibility to offer new and innovative products. 
USPS also reiterated its interest in acquiring additional statutory authorities in February 2012 when it issued its 5-year business plan. According to USPS, opportunities in these three areas could provide significant value to customers, improve USPS’s financial position, and take full advantage of its resources and competencies. USPS emphasized, however, that additional innovations in these areas will not be sufficient to solve USPS’s dire financial situation. Results will also be constrained by the economic climate and by changing use of the mail. USPS’s financial viability is dependent not only on cutting costs, but also generating additional revenues. USPS generated a net income of $141 million in 2011 from its offerings of nonpostal services and products. While beneficial, this income pales in comparison to USPS’s net loss of $15.9 billion in 2012. To address its deepening fiscal crisis, USPS believes that additional services and products in three areas—including nonpostal services, shipments of alcoholic beverages, and cooperation with state and local governments—could generate some additional revenue; however, additional statutory authority is needed. Legislation was introduced to improve USPS’s financial condition. The legislation included increased flexibilities to allow USPS to offer new products; however, Congress was unable to reach overall agreement on the steps needed to sustain and transform USPS. We continue to believe action to address the long-standing challenges that hinder USPS’s financial viability, including a consideration of options to expand its revenue- generating potential, remains necessary. We provided a draft of this report to USPS for review and comment. In its written comments, reprinted in appendix V, USPS stated that its financial viability is dependent not only on cutting costs but also generating additional revenue. USPS also provided technical comments which we incorporated, as appropriate. 
We are sending copies of this report to the appropriate congressional committees, the Postmaster General, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

MoverSource is a program involving USPS’s change-of-address orders. The program is offered through a multiyear alliance between USPS and Imagitas—a subsidiary of Pitney Bowes—and provides USPS customers with, among other things, a form for requesting address changes; confirmation of the change; and mailings containing move-related tips, advertising, and offers. The revenue and cost of the program are shared between USPS and Imagitas. USPS also receives a share of the $1 fee collected for each internet change-of-address order. In addition, the mailings generate postage revenue for USPS that is not shared. According to USPS, the MoverSource program adds value to the change-of-address process, while defraying its annual costs for processing these changes. USPS typically processes about 40 million change-of-address orders annually, of which about one-third are processed online. USPS sells a wide range of stamps and stamp-related items to customers involved in the hobby of stamp collecting. Examples of philatelic items include first day covers, ceremony programs, uncut press sheets, framed stamps, binders for storing stamps, and philatelic guides. USPS satisfies its orders for philatelic items through two fulfillment channels—retail outlets and USPS’s Stamp Fulfillment Services. 
Selected philatelic items are available at USPS retail facilities, but most are sold through USPS’s Stamp Fulfillment Services via USPS.com®, the USA Philatelic catalog, phone, and fax. In May 2012, the price of these items ranged from less than a dollar for the first day cover of the Common Tern bird stamp to $1,440 for a sheet of American Wigeon duck stamps. In exchange for a portion of revenue generated, USPS provides its website customers with access to other companies (affiliates) that also provide mailing services and products. USPS’s website currently hosts hyperlinks to three affiliates—Cardstore, Click2Mail, and PremiumPostcard—under agreements styled as affiliate marketing arrangements. According to USPS, affiliate arrangements provide customers with increased access to, among other things, postal products and services and generate revenue for USPS through affiliation fees, increased website traffic, and additional postage sales. According to USPS, its affiliate website agreements are actively managed by both parties. As a result, the parties meet monthly and discuss marketing goals and objectives to increase the parties’ revenue and website visits. This program is similar to the one described immediately above, except that USPS does not actively manage these affiliate arrangements. USPS’s website currently hosts hyperlinks to three such affiliates—Start Sampling, My Savings, and Maponics—which provide customers with products and services ranging from free samples and coupons to maps. According to the Postmaster General, USPS’s agreements with these affiliates can be viewed as leases of USPS’s “virtual real estate.” “Linking only” affiliates pay either a flat fee (i.e., “cost per click” as in the case of Start Sampling and My Savings) or, in the case of Maponics, share a portion of the revenue with USPS for having a link to its website. 
Electronic Postmark® (a grandfathered nonpostal service first offered in 1995): Electronic Postmark is a web-based security program that enables customers, using authorized service providers under license with USPS, to detect whether documents or files time stamped with an electronic postmark have been altered since the postmark was applied. Authorized providers generate the “timestamp” (the postmark) on secure servers that they own and maintain. Providers pay a quarterly fee for this service, with additional fees for usage above a certain threshold. In 2010—the last year in which USPS sold this service to a provider—the quarterly fee was $75,000 per provider for up to 1.5 million electronic postmarks and $0.02 for each additional postmark.

This nonpostal service involves a non-exclusive lease agreement between USPS and FedEx, which, for a fee, allows FedEx to place its drop boxes at USPS post offices. In September 2012, there were about 4,870 of these boxes located at USPS retail facilities. According to USPS, all responsibilities related to the installation, maintenance, collection, and removal of the drop boxes reside with FedEx.

This program is similar to No. 9 below, except that it involves the sale of postal branded merchandise through other (nonpostal) channels. In fiscal year 2012, USPS had about 20 licensing agreements with third-party vendors for their use of USPS’s intellectual property on thousands of products including apparel, fashion accessories, cards and stationery, pet products, toys and games, and other types of consumer products. USPS charges a nominal fee ($25) for the use of its trademarks or copyrighted material for noncommercial or limited commercial purposes. However, it negotiates the price for its licensees’ use of patents and other intellectual property involving more extensive commercial purposes. 
According to USPS, it manages its licensed products by, among other things, requiring licensees to obtain advance approval for USPS-branded products, imposing quality control standards, and policing product use and promotion.

USPS has a variety of agreements with private, nongovernmental entities to lease excess USPS space, including parking, office, warehouse space, and land. For example, USPS leases its rooftop space and excess land to advertisers and cellular providers for billboards and antenna towers. In fiscal year 2012, USPS had about 600 lease agreements with nongovernmental entities. The parties negotiate the price of the lease, which varies by agreement.

USPS licenses its intellectual property, including its postal trademarks and stamp images, to parties who use these images on merchandise that USPS sells in its post offices and on its website. (The program involves the actual sale of the licensed retail merchandise—not the royalty payments that USPS receives for licensing its intellectual property to these parties.) In fiscal year 2012, USPS had 150 licensing agreements with third-party vendors for their use of USPS branded items such as weight scales, stamp dispensers, teddy bears, passport holders, framed artwork, and key chains. In September 2012, prices for these products ranged from $1.30 for reflective mailbox stickers to $134.99 for framed art.

USPS produces and sells passport photos to individuals for a fee of $15 at more than 5,000 USPS passport application acceptance sites.

USPS offers self-service photocopying services at about 2,540 of its retail facilities. In some locations, photocopiers were installed by commercial vendors under contracts with USPS. The remaining machines were installed by USPS. In September 2012, USPS charged a fee of between $0.10 and $0.50 per page for copying.

USPS rents excess facility space and services at its two training facilities to outside parties. 
The facilities include meeting rooms, housing, and exercise areas. Available services include (1) conference-related services such as food and the rental of audio-visual equipment, and (2) hospitality-related services such as lodging, fitness-related services, banquet services, and the onsite sale of sundries. USPS contracts with a company to manage each of the facilities. Management companies set the price for the services provided and USPS receives a negotiated percentage of net profits.

Alternate Postage removes a step in the mailing process because it enables consumers to send greeting cards without affixing postage. According to USPS, the alternate payment method makes it easier and more convenient for customers to purchase and mail cards because they do not have to determine and affix the proper postage before mailing. Instead, the cost of postage is included (prepaid) in the price consumers pay for the card. To participate, a company must produce and distribute greeting cards with envelopes bearing barcodes and other USPS specified markings that allow USPS to identify each envelope in the mail stream. Participating companies pay USPS in two steps. First, USPS receives 50 percent of the postage revenue when the card is sold to consumers (or distributed to third-party vendors)—even if the card is not mailed. Second, companies must pay USPS for the postage for cards that USPS determines were actually mailed.

Every Door Direct Mail–Retail was a test that used simplified addressing, entry, pricing and payment options to entice small and medium-sized businesses to advertise to consumers within their neighborhoods. By making advertising through the mail (direct mail) less costly and complex, USPS hoped to attract new mail customers and/or increase its mail volume and revenues from existing small-volume business mailers. The test had several features. 
For example, USPS allowed small and medium-sized businesses to send up to 5,000 mail pieces per day at a reduced cost (14.2 cents per piece). USPS also reduced the complexity of the mailings by allowing the businesses to prepare their mail using simplified addressing, such as “Postal Customer,” which eliminated the need to purchase mailing lists and print specific addresses on each of their mail pieces. In addition, USPS made it more convenient for these businesses to enter mail into the mailstream because it allowed them to bring their mailings to their local post office instead of to a mail-processing plant that may not be conveniently located. Finally, USPS waived all permitting and mailing fees for the businesses’ mailings. Revenue: According to USPS, revenue for this product has continued to grow and, as of October 2012, surpassed $2 million per week at its retail locations. USPS estimates this product will result in $55 million in revenues in fiscal year 2012 and $100 million in revenue in fiscal year 2013.

Mail Works Guarantee (anticipated time frame for market test, at approval: ~June 2011 to ~May 2013): This test was aimed at demonstrating the effectiveness of direct mail advertising campaigns to 16 of the country’s “top” advertising companies. According to USPS, these companies spend $90 billion annually for advertising on all media, but spend approximately $3 billion (about 3 percent) advertising through the mail. Consequently, in requesting the PRC’s approval for the market test, USPS stated that it saw “huge potential revenue” if USPS could involve these companies in direct mail advertising campaigns using First-Class and Standard Mail. To qualify for the test, a company must spend at least $250 million on advertising annually, but allocate less than 1 percent of its total advertising budget to direct mail. In addition, it must mail 500,000 to 1 million pieces of First-Class Mail or Standard Mail during a direct mail campaign. 
Participating companies would be required to pay normal postal rates for their mailings, but receive a postage-back guarantee (i.e., a credit to their USPS account of up to $250,000) if a direct mail advertising campaign fails to achieve its goal. Revenue limitation: ≤$10 million. Status: Ongoing, but unlikely to be successful. As of November 30, 2012, USPS had not yet found any companies interested in participating in the market test. According to USPS, the test is not likely to be successful because its incentives for program participation “were not sufficient to cause customers to change their behavior.” While the test has not been successful thus far, USPS said it is continuing to solicit participation from large companies with multi-million dollar advertising budgets that do not currently use the mail as part of their advertising mix. In addition, USPS is exploring revisions to the test’s participation requirements to help attract customers. On January 8, 2013, USPS filed to terminate the Mail Works Guarantee market test with the PRC. Revenue: None. The First-Class Tracer test product is designed to satisfy the needs of consumers who want to track their First-Class letters through the mail stream. Specifically, the product allows customers to purchase and affix a barcode label, tracing number, and a code which, collectively, enable them to track the status (excluding delivery confirmation) of their First-Class mailings on USPS’s website. According to USPS’s request for the test’s approval, USPS expected to sell the tracing labels in packages of 5 and 10 and to sell the packages at differing prices—ranging from $0.99 to $2.99 depending on the quantity and location—to test customer acceptance at 50 retail postal locations in the Washington, D.C., metropolitan area. Revenue limitation: ≤$10 million. Status: Ongoing. According to USPS, while this test was not expected to conclude until November 2013, the First-Class Tracer had sold out at most of its participating retail locations as of mid-October 2012.
Revenue: According to USPS, this product generated $3,065 in revenue through mid-October 2012. Collaborative Logistics—the first experimental market test offered under PAEA—was initiated to reduce excess capacity in USPS’s highway transportation network. According to USPS, its utilization of vehicles across this network varied by day of week, time of month, and season. As a result, USPS sought to optimize its network of purchased highway transportation by reselling available space on its contractors’ trucks. During the course of the test, USPS resold, on a space-available basis, truck space to six customers for shipments that were required to be loaded on pallets. The loads were delivered to large postal processing facilities and picked up at large downstream postal facilities. USPS negotiated the price with each customer based on a variety of factors, including the customers’ required service standards. Revenue limitation: ≤$10 million. Status: Terminated in September 2011. USPS views this market test as successful, but did not seek to make it a permanent postal product despite its original intention to do so. Instead, USPS terminated the test in September 2011, indicating that it had reevaluated its plans. USPS explained that its ongoing facility consolidations had resulted in significant opportunities to reduce overcapacity and costs within its transportation network. Depending on its future financial and operational conditions, USPS said it may seek to introduce this product on a permanent basis sometime in the future. Revenue: According to USPS, this product generated about $2.1 million over the testing time frame. Samples Co-Op Box was a 1-week test mailing of parcel boxes containing an assortment of samples, such as beauty, health, and snack products, from various packaged goods companies. USPS partnered with a company to prepare and mail the boxes and to conduct market research, and USPS delivered the boxes to consumers in targeted demographic areas without charge.
According to USPS, while product samples provide an effective way for companies to build brand awareness for their products, the volume of samples sent through the mail has declined in recent years. Consequently, this test was intended to, among other things, explore ways to increase sample mailings that, according to USPS, offered “the potential for millions of dollars” of additional postal revenue. Another purpose of the test was to obtain information about the cost-effectiveness of the program and its value to product manufacturers and consumers. Gift Cards (anticipated timeframe at approval: ~May 1, 2011 to ~April 30, 2013). This market test allows customers to purchase gift cards loaded with a specified sum of money that may be sent through the mail. According to USPS, the sale of gift cards at selected retail postal locations increases customer convenience and enhances postal revenue by encouraging customers to use the mail for gifting purposes. USPS activates the card and earns a negotiated percentage of each card’s activation fee. The remainder of the fee is remitted to the issuer of the card. In its request for approval of the market test, USPS indicated that it intended to enter into an agreement with one or more card providers. USPS also indicated that it might test different activation fees, which typically range from $3.95 to $5.95 depending on the value and type of the gift card. Revenue limitation: ≤$10 million. Status: Ongoing. USPS entered into an agreement with American Express and currently offers this vendor’s gift cards at about 5,200 USPS retail locations. The test is expected to be completed by May 2013. Revenue: According to USPS, this product generated $96,000 in revenue during fiscal year 2011 and is expected to generate about $1.1 million during fiscal year 2012.
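The activation-fee split described above lends itself to a short illustration. The percentage USPS actually negotiates with the card issuer is not disclosed in the report, so the 30 percent share used below, like the helper function itself, is purely a hypothetical figure for illustration.

```python
def split_activation_fee(fee, usps_share):
    """Split a gift card activation fee between USPS and the card issuer.

    `usps_share` is the fraction USPS retains under its negotiated
    agreement; the real figure is not public, so an assumed value
    must be supplied.
    """
    usps_cut = round(fee * usps_share, 2)
    issuer_cut = round(fee - usps_cut, 2)
    return usps_cut, issuer_cut

# Activation fees typically range from $3.95 to $5.95 per card.
# Assuming a hypothetical 30 percent USPS share:
for fee in (3.95, 4.95, 5.95):
    usps_cut, issuer_cut = split_activation_fee(fee, 0.30)
    print(f"${fee:.2f} fee -> USPS ${usps_cut:.2f}, issuer ${issuer_cut:.2f}")
```

Whatever share applies, the two portions always sum to the full activation fee, with the issuer remitting USPS's cut at card activation.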
Metro Post is a same-day delivery service for local buyers who purchase products from participating online e-commerce companies (and associated retailers) in selected metropolitan areas. According to USPS, it anticipates entering into relationships with up to 10 online e-commerce companies over the course of the market test. Metro Post is intended to test the operational feasibility of same-day package delivery to and from multiple locations and to help USPS determine the optimal pricing structure for the service. Status: Ongoing. PRC approved the market test for Metro Post on November 14, 2012. USPS selected San Francisco as the first city for testing. The market test began on December 17, 2012. Revenue: Not yet available. Pursuant to an interagency agreement between USPS and the State Department and authorities granted under 39 U.S.C. § 411 and 31 U.S.C. § 686, USPS is authorized to accept passport applications on behalf of the Department. Under the terms of the most recent agreement (2000), trained USPS personnel, among other things, review each passport application for completeness, record pertinent information about the identification the applicant used, collect fees, and send the completed application to the State Department. USPS retains a portion ($25) of the collected fees in the form of an execution fee. In fiscal year 2011, USPS accepted approximately 5.6 million passport applications in over 6,300 post offices nationwide. According to USPS, the applications generated net income of $43 million for the period. By using USPS’s extensive network of post offices, the agreement is intended to provide passport applicants with more convenient access to passport acceptance services than the State Department could provide alone, particularly in remote U.S. areas. According to the interagency agreement, either party may terminate the agreement, without liability, with 180 days advance notification to the other party.
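As a rough cross-check of the passport figures above, multiplying the $25 execution fee by the roughly 5.6 million applications gives USPS's gross fee revenue; the difference from the reported $43 million net income implies its processing costs, under the simplifying assumption that execution fees were the only revenue from the service.

```python
# Figures from the report (fiscal year 2011); application count is approximate.
EXECUTION_FEE = 25            # dollars USPS retains per application
APPLICATIONS = 5_600_000      # passport applications accepted
NET_INCOME = 43_000_000       # net income reported by USPS

gross_fees = EXECUTION_FEE * APPLICATIONS
# Simplifying assumption: execution fees were the only revenue from the service.
implied_costs = gross_fees - NET_INCOME

print(f"Gross execution fees:     ${gross_fees:,}")       # $140,000,000
print(f"Implied processing costs: ${implied_costs:,}")    # $97,000,000
```

The implied-cost figure is an inference, not a number reported by USPS; it would change if the service earned revenue beyond the execution fee (e.g., from passport photos, which USPS accounts for separately).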
Through various interagency agreements and under authority granted by 39 U.S.C. § 411, in 2006, USPS began conducting EEO investigations on behalf of other federal agencies. According to USPS officials, USPS initiated this service to generate additional revenue. In addition, USPS believed that, given its extensive experience in this area, it could use its existing contractor workforce to reduce the costs and improve the timeliness and quality of other agencies’ EEO complaints processing. Since 2006, 17 federal agencies have entered into interagency agreements with USPS for this purpose. In fiscal year 2011, USPS’s National EEO Investigative Service Office completed 255 investigations for 9 federal agencies that generated about $558,000 in net income. According to USPS, interagency agreements can be terminated at any time by the mutual agreement of the parties. Pursuant to authorities granted under 39 U.S.C. § 411 and a 4-year interagency agreement with HUD, USPS agreed to provide HUD with data on vacant U.S. residential and business addresses. According to the agencies’ most recent agreement (September 2011), HUD intends to use this information to forecast neighborhood changes, assess neighborhood needs, and measure the performance of several of its programs. HUD agreed to pay USPS $30,000 in 2011 and has options to purchase additional data over a 4-year period for a total cost of up to $160,000. The parties specified that the agreement may be terminated for any reason or no reason by written notice to the other party. In addition to services that USPS performs on a discretionary basis for federal agencies, USPS is required—either by law or executive order—to perform numerous services on behalf of federal agencies. For example, the Migratory Bird Hunting and Conservation Stamp Act requires USPS to print, issue, and sell duck stamps (that serve as hunting licenses) for the U.S. Fish and Wildlife Service. Pub. L. No. 73-124, 48 Stat. 451 (Mar. 16, 1934).
Through executive order, USPS also is directed to deliver medicines to individuals in the event of a large-scale biological attack. Exec. Order No. 13527, 75 Fed. Reg. 737 (Jan. 6, 2010). 2nd Ounce Free enables large mailers, such as banks, to send twice as much mail for the one-ounce price. According to USPS, this product allows mailers to use the second ounce for operational or marketing purposes, which increases the potential value of their mailings. Alternate Postage Payment removes a step in the mailing process because it enables consumers to send greeting cards without affixing postage. According to USPS, the alternate payment method makes it easier and more convenient for customers to purchase and mail cards because they do not have to determine and affix the proper postage before mailing. Instead, the cost of postage is included (prepaid) in the price consumers pay for the card. USPS introduced this product as an experimental product in January 2011 and, according to USPS, expects to seek its approval as a permanent product at a later date. Appendix II provides additional information on this product initiative. USPS is pursuing a pricing incentive that will offer customers a higher discount for sending First-Class Mail that meets its Intelligent Mail barcode (IMb) requirements. According to USPS, this initiative supports USPS’s goal of increasing adoption of its intelligent barcodes that, among other things, enhance USPS’s mail-processing efficiency and increase its ability to track the location of mail as it moves through the mail stream. This initiative targets USPS’s largest mailers of First-Class Mail and Standard Mail and is carried out through negotiated service agreements between the parties. Participating mailers will receive reduced postage costs if they meet specified volume commitments. According to USPS, the initiative may be extended to other product categories if it is successful.
This initiative allows mailers to enhance the “indicia” area (i.e., the postage block area) of a mail piece with a picture or image, including mobile barcodes. According to USPS, the images can be used to advertise brands, bring immediate attention to the mail piece, and potentially improve the rate at which mail is opened. This initiative represents USPS’s ongoing process to analyze its First-Class Mail product and service offerings. The analysis is intended to result in, among other things, the continuous simplification of mail offerings, the optimization of USPS’s First-Class Mail products in the marketplace, and the elimination of product redundancies. In addition, USPS hopes to reduce the cost and complexity of its customers’ mailings and identify gaps between product features and customer needs. According to USPS, the introduction of its 2nd Ounce Free initiative was a direct result of this effort. This initiative is intended to increase USPS’s sales of stamps and philatelic products. Specifically, USPS said it intended to renew its emphasis on properly marketing and displaying stamps and philatelic products in its retail locations and to expand its Forever stamp pricing to its collectible stamp program. This promotion ran from November 7 to 21, 2012, and offered business mailers an upfront 2-percent postage discount on qualifying mailings. To qualify, the mailings had to include a two-dimensional barcode that, when read or scanned by a mobile device, linked the recipient to a website for purchasing the advertised product. According to USPS, by delivering mobile-optimized promotional offers, coupons, and catalogs to customers in time for the Black Friday-Cyber Monday shopping sprees, the initiative was intended to spur mobile purchasing. This initiative involves a series of short-term promotions designed to provide mailers and marketers with incentives to promote innovative uses of the mail that increase its value to recipients.
In addition, according to USPS, the promotions are intended to accelerate best practices among marketers and business mailers, while increasing awareness and engagement among consumers. This initiative involves activities that support overall USPS efforts, such as those related to simplifying USPS’s pricing, product lines, and mailing processes. For example, according to USPS, it is examining its existing discounts for the presorting of customer mailings and is establishing flexible pricing based on location and capacity in its processing and transportation networks. This initiative involved an advertising campaign to promote the value of advertising and First-Class Mail to business and residential customers. According to USPS, the campaign ran in the fall of 2011 and centered on two television ads—called “Hacked” and “Face-to-Face.” Direct Mail Hub is a USPS internet site designed to educate small business owners, including many first-time business users, on how to use direct mail to advertise their businesses to consumers. According to USPS, the site helps small businesses generate mailings from start to finish and complements its Every Door Direct Mail product. This initiative is intended to respond to the decline in the circulation of newspapers, which advertisers often use to deliver inserts, such as circulars, to customers. Instead of having their advertisements inserted in a newspaper, advertisers can—for example—bundle multiple advertising leaflets in mailings to consumers. The initiative is carried out through negotiated service agreements and, as of November 2012, USPS had negotiated one such agreement. This initiative targets large advertising companies that rely on other media—not mail—to advertise to prospective customers. Specifically, USPS hopes to demonstrate the effectiveness of direct mail as an advertising medium. Appendix II provides additional information on this initiative, which was introduced as an experimental product in June 2011.
On January 8, 2013, USPS filed to terminate the Mail Works Guarantee market test with the PRC. This initiative offers business mailers a 2 percent postage discount on their qualified mailings. To qualify, the mailing must contain a two-dimensional barcode or print/mobile technology that can be read or scanned by a mobile device. According to USPS, the initiative is designed to encourage marketers to incorporate a “Mobile on Mail” multi-channel marketing approach that uses mail as the gateway for driving e-commerce activity and transactions. This initiative involves a potential enhancement to MoverSource—a program for changing a mail recipient’s address. With this initiative, USPS is exploring the feasibility of allowing mail recipients to use their mobile devices as another vehicle for changing their mailing address. (USPS currently allows mail recipients to change their addresses via a printed form, by phone, or online.) Appendix I provides additional information about MoverSource. This initiative involves an ongoing USPS effort to develop mutually beneficial partnerships with companies in the periodical industry. According to USPS, as of November 2012, one topic under discussion involved potentially offering magazine subscriptions through USPS channels. Reinvigorate Samples is an effort aimed at increasing the use of mail to deliver product samples. This initiative is a follow-on to the Samples Co-Op Box that USPS initiated as an experimental product. Appendix II provides additional information about the Samples Co-Op Box. USPS introduced Every Door Direct Mail–Retail as an experimental postal product that used simplified addressing, entry, pricing, and payment options to entice small and medium-sized businesses to advertise to consumers within the businesses’ neighborhoods.
By making advertising through the mail (direct mail) less costly and complex, USPS hopes to attract new mail customers and/or increase its mail volume and revenues from existing small-volume business mailers. The PRC approved Every Door Direct Mail–Retail as a permanent postal product in September 2012. Appendix II provides additional information about this product. This initiative is intended to make it easier for technology-savvy and mobile customers to, among other things, order packing supplies, buy postage, print labels, and schedule free package pickups using USPS’s website and other digital channels, such as smart phone mobile applications. The increased use of digital channels is intended to provide customers with greater access to USPS’s products and services. ePostage is an initiative to provide mobile and technology-savvy customers with greater access to USPS’s products and services. As part of this effort, USPS has developed commercial shipping tools that, among other things, provide payment and tracking services for online retailers. According to USPS, for example, its ePostage solution is currently used by one major online retailer. This initiative involves enhancements to USPS’s portfolio of products and services targeted toward small businesses. According to USPS, recent enhancements include Carrier Pickup and Priority Mail Prepaid Forever Flat Rate Packaging. USPS also has extended its flat-rate-shipping options with Priority Mail and Express Mail padded envelopes, and upgraded its online Click-N-Ship platform on USPS.com. This initiative involves the renegotiation of USPS’s Parcel Select contracts to adjust its pricing for deliveries based on geography. Specifically, USPS adopted a two-tiered pricing approach—one tier for suburban areas and another tier for rural areas—that is expected to lead to greater volume and revenue.
According to USPS, for example, the contract changes enable private-sector firms to leverage USPS’s low costs and delivery access to provide “last-mile services” to their customers. This initiative involves a First-Class Package Service for packages weighing less than 1 pound. According to USPS, this service provides the only per-ounce pricing in the marketplace. This initiative is intended to better align USPS pricing with market demand and USPS costs. The Postal Accountability and Enhancement Act of 2006 required USPS’s market-dominant products to cover their costs and tied any price increases for these products to the rate of inflation. For example, USPS moved Parcel Post®—previously an unprofitable market-dominant product—to its competitive product line. According to USPS, this change will enable USPS to raise its price for Parcel Post to at least cover the product’s costs. In addition, the change is expected to benefit USPS’s Priority Mail® products, which, according to USPS, had been disadvantaged because these products were previously competing against the unprofitable Parcel Post service. This initiative is intended to help USPS compete for a larger share of the parcel shipment business. Specifically, USPS developed a marketing plan to communicate the benefits of its products, services, and initiatives through multi-channel marketing campaigns. According to USPS, one feature of the plan involves reminding customers about the benefits and features of existing shipping options, which it hopes will “continue to position” USPS as the “shipper of choice” for businesses. USPS plans to give shippers greater control of their packages by, among other things, enabling them to request that a mail piece be returned or redirected before final delivery to the recipient. For example, Hold For Pickup permits a package to be shipped directly to a Post Office and held until picked up.
Other ancillary services include Adult Signature Required and Adult Signature Restricted Delivery, which are designed to satisfy security and privacy concerns for shipments that by law require an adult’s signature. This initiative provides customers with various self-service returns options, such as the ability to print shipping labels that make it easier and more convenient to return merchandise through the mail. According to USPS, the introduction of these “return-on-your-own” options is intended to make USPS the preferred channel for business-to-consumer returns. This initiative involves an expansion of USPS’s offerings of free packaging supplies to increase customer convenience and support growth of its expedited shipping products. The initiative involves the introduction or improvement of various processes USPS uses to accept business mail for processing, including processes that verify and track the mailings. In addition, USPS intends to upgrade its infrastructure to allow electronic data on mailings to be stored at designated local sites (in addition to the centralized database) to address the possibility of network outages. Business Mail Entry: This initiative involves the introduction of online resources and streamlined processes intended to make it easier for mailers to prepare and enter their mail into the mail stream. This initiative involves the 400 field employees who provide day-to-day assistance to USPS’s 27,000 commercial customers throughout the country. According to USPS, in addition to customer service, these employees also establish relationships with its customers’ key decision makers. USPS is exploring new tiered pricing structures that would allow customers to earn additional mailing discounts once they exceed specified revenue or volume thresholds. USPS plans to in-source its customer call-in center in an effort to improve customer service and increase the likelihood of customers purchasing postal products.
According to USPS, this is a high-priority initiative that is required under the terms of its latest contract with the American Postal Workers’ Union. USPS said the call center will begin operations in the fall of 2012 and ramp up its operations through the spring of 2013 to meet its contractual commitment to have 1,100 union employees assigned to call center operations by May 2013. USPS is developing and expanding its line of retail products, such as greeting cards, promotional and mail-related products, and tote bags. All of these products are aimed at extending postal branding in the marketplace, cross-promoting core postal products, and improving the overall shopping experience. This initiative targets small business customers that, according to USPS, are looking for increased convenience from their post office boxes. Specifically, USPS reclassified 6,000 of its post office boxes from market-dominant to competitive products in markets with other mail services in fiscal year 2011. This change allowed USPS to offer post-office-box customers additional services such as expanded lobby hours, earlier pick-up times, and the use of post office locations for street-styled addressing (e.g., 131 South Center St. #3094, instead of P.O. Box 3094) for an increased fee. This initiative was designed as a communications campaign built around the theme that USPS is “Everywhere I Am” with “I” being the customer. USPS is pursuing this initiative as an alternative access communications effort. According to USPS, this theme is extremely important to customers who are mobile and tech-savvy, and therefore, not afraid to use remote postal locations to do business. USPS is pursuing licensing opportunities, particularly opportunities that will promote its stamp program. According to USPS, its licensing efforts have two goals.
First, by controlling the use of its postal trademarks, USPS said it protects its brand and ensures that its good name is not harmed by indiscriminate use of its trademarks. Second, USPS said it has a very valuable collection of American art as part of its stamp image library and that private-sector organizations want to leverage the collection for their endeavors. According to USPS, licensing this art helps defray its costs and helps support its stamp program. Appendix I provides additional information about USPS’s licensing activities. Partner Campaigns: This initiative involves working with retail partners to increase awareness of the retailers’ stores as locations where customers can obtain postal services. This initiative focuses on improving customer service at USPS’s retail facilities. For example, USPS plans to test different approaches and staffing options to improve customer experience and reduce customer wait time at its retail facilities. In rural America, USPS is adjusting service to local demand through its use of Village Post Offices and changes to its hours of facility operations. According to USPS, it plans (over the next 5-10 years) to shift the majority of its retail volume from USPS-owned facilities to retail partners and stamp partners via its Retail Partner Expansion Program. This program will establish regional and national contracts that will allow select retailers to provide USPS products and services to their customers. According to USPS, the program will increase customer access to USPS products and services more rapidly and at a lower cost than would be possible through existing channels. In addition, USPS believes that the program will increase its market share and create a substantial retail platform for introducing new USPS products and services in the future, among other benefits. This initiative seeks to substantially increase USPS’s revenue from political mail.
Specifically, USPS hopes to generate $500 million in revenue in fiscal years 2012 and 2013 through a major sales and marketing effort. According to USPS, such an increase would double the revenue it received during the previous election season. This initiative is intended to increase customer access to self-service postal options which, according to USPS, will substantially lower its costs and improve its customers’ experience, among other benefits. Specifically, USPS plans to install self-service kiosks in about 2,500 high-traffic USPS facilities to emphasize self-service as the optimal channel for accessing postal services. This initiative involves realigning USPS’s sales staff to build relationships with its largest customers. USPS expects over $60 million in incremental revenue as a result of this realignment in 2012. As part of its effort to grow revenue, personnel at post offices are attempting to increase their sales of expedited packaging and extra services through a concerted sales effort. Lorelei St. James, (202) 512-2834 or stjamesl@gao.gov. In addition to the contact named above, Kathleen Turner and Heather Halliwell (Assistant Directors), Tonnye Conner-White, Delwen Jones, Steve Martinez, Josh Ormond, James Russell, and Crystal Wesco made key contributions to this report.
USPS continues to face a dire financial situation. Reducing costs is essential, but USPS also must generate additional revenue through the sale of products and services. PAEA, enacted in 2006, eliminated USPS's authority to offer nonpostal services unless they were offered as of January 1, 2006, and expressly grandfathered by the PRC. USPS may, however, offer new nonpostal services if they are related to the grandfathered nonpostal services. It may also offer experimental postal products that meet certain conditions. As requested, this report describes: (1) the nonpostal services grandfathered after the enactment of PAEA, experimental postal products offered since enactment of PAEA, and discretionary services USPS currently performs for other federal agencies; (2) initiatives--including nonpostal services and experimental postal products--USPS is pursuing to generate additional revenue and the status of these initiatives; and (3) the reasons USPS decided not to pursue other revenue-generating initiatives that it had identified. GAO reviewed PAEA provisions and PRC decisions pertaining to nonpostal services, experimental postal products, and services performed for other federal agencies and USPS documents related to the initiatives that USPS chose to pursue and those it decided not to pursue. GAO interviewed USPS officials regarding these issues. In commenting on a draft of this report, USPS agreed that its financial viability is dependent not only on cutting costs, but also on generating additional revenue. The U.S. Postal Service (USPS) currently offers 12 nonpostal services (i.e., services not directly related to mail delivery) that were grandfathered by the Postal Regulatory Commission (PRC) after enactment of the Postal Accountability and Enhancement Act (PAEA). These services--which include Passport Photo Services, the sale of advertising to support change-of-address processing, and others--generated a net income of $141 million in 2011. 
Since enactment of PAEA, USPS has received approval from PRC to offer eight experimental postal products, which are products that differ significantly from other offered products, such as the sale of gift cards loaded with a specified sum of money. Lastly, USPS performs at least four discretionary services (i.e., services it chooses, rather than is required, to perform) for other federal agencies, such as accepting passport applications for the State Department. USPS is currently pursuing 55 new initiatives that it identified based on outreach to postal stakeholders. USPS chose to pursue these initiatives because of their potential to increase revenue and add value to the mail, among other reasons. Forty-eight initiatives are extensions of existing lines of postal products and services, such as offering Post Office Box customers a suite of service enhancements (e.g., expanded lobby hours and earlier pickup times) at selected locations and increasing public awareness of the availability of postal services at retail stores. Three initiatives are extensions of existing nonpostal services, including allowing customers to forward their mail to a new address using mobile devices. Finally, four of the initiatives involve experimental postal products, such as prepaid postage on the sale of greeting cards. These four experimental products are among the total of eight experimental products that have received PRC approval since enactment of PAEA. Forty-five of the 55 initiatives are ongoing; the remaining 10 are under development. USPS considered but decided not to pursue 25 other stakeholder-identified initiatives, primarily for financial reasons. Twelve initiatives were abandoned because USPS determined they were not likely to be profitable or the initial investment was too high. Reasons for not pursuing other initiatives included insufficient stakeholder interest or lack of statutory authority.
USPS would like to pursue revenue-generating opportunities in three areas--nonpostal services, shipments of alcoholic beverages, and services performed for state and local governments--if it is provided with statutory authority to do so. USPS officials said opportunities in these areas could improve USPS's financial position, but they emphasized that additional innovations will not be sufficient to return USPS to financial solvency. Results will also be constrained by the economic climate and by changing use of the mail. USPS's multiyear revenue plan detailing its competitive strategies is expected in the spring of 2013.
An influenza pandemic is caused by a novel strain of influenza virus to which the human population has little or no immunity and which therefore is highly transmissible among humans. Unlike incidents that are discretely bounded in space or time (e.g., most natural or man-made disasters), an influenza pandemic is not a singular event, but is likely to come in waves, each lasting weeks or months, and pass through communities of all sizes across the nation and the world simultaneously. While a pandemic will not directly damage physical infrastructure such as power lines or computer systems, it threatens the operation of critical systems by potentially removing from the workplace, for weeks or months, the essential personnel needed to operate them. On June 11, 2009, the World Health Organization (WHO) declared a pandemic based on the novel influenza A (H1N1) virus then in wide circulation by raising the worldwide pandemic alert level to Phase 6—the highest level. Figure 1 shows the WHO phases of a pandemic, characterizing Phase 6 as community-level outbreaks in at least one country in a different WHO region in addition to the criteria defined in Phase 5. This action was a reflection of the spread of the new H1N1 virus, not the severity of illness caused by the virus. At that time, more than 70 countries had reported cases of 2009 H1N1 and there were ongoing community-level outbreaks in multiple parts of the world. As of November 8, 2009, WHO reported over 503,536 confirmed cases and at least 6,260 deaths, acknowledging, however, that the number of cases was understated because WHO no longer required affected countries to count individual cases and confirm them through laboratory testing. Like seasonal influenza, the 2009 H1N1 influenza can vary from mild to severe. 
Given ongoing H1N1 activity to date, the Centers for Disease Control and Prevention (CDC) stated that it anticipates that there will be more cases, more hospitalizations, and more deaths associated with this pandemic in the United States in the fall and winter. The novel H1N1 virus, in conjunction with regular seasonal influenza viruses, poses the potential to cause significant illness with associated hospitalizations and deaths during the U.S. influenza season. The United States continues to report the largest number of 2009 H1N1 cases of any country worldwide, although most people who have become ill have recovered without requiring medical treatment. The 2009 H1N1 influenza has been reported in all 50 states, the District of Columbia, Guam, American Samoa, the Commonwealth of the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. As shown in figure 2, the Strategy lays out three high-level goals to prepare for and respond to an influenza pandemic: (1) stop, slow, or otherwise limit the spread of a pandemic to the United States; (2) limit the domestic spread of a pandemic and mitigate disease, suffering, and death; and (3) sustain infrastructure and mitigate impact on the economy and the functioning of society. These goals are underpinned by three pillars that are intended to guide the federal government’s approach to a pandemic threat: (1) preparedness and communication, (2) surveillance and detection, and (3) response and containment. Each pillar describes domestic and international efforts, animal and human health efforts, and efforts that would need to be undertaken at all levels of government and in communities. The Plan outlines steps for federal entities and also provides expectations for nonfederal entities—including state, local, and tribal governments; the private sector; international partners; and individuals—to prepare themselves and their communities. 
Of the 324 action items in the Plan, 144 are related to pillar 1 on preparedness and communication; 86 are related to pillar 2 on surveillance and detection; and the remaining 94 are related to pillar 3 on response and containment. Nearly all of the action items (307 of 324) have a measure of performance, and most (287 of 324) of the action items have a time frame identified either in the action item’s description, measure of performance, or both. Most of the action items in the Plan—those that were not tied to response—were expected to have been completed in 3 years, by May 2009. Since the issuance of the Plan in May 2006, the HSC publicly reported on the status of the action items at 6 months, 1 year, and 2 years in December 2006, July 2007, and October 2008, respectively. Although this administration has not yet publicly reported on the 3-year status of implementing the Plan’s action items, an NSS official stated that the 3-year progress report had been in development prior to the 2009 H1N1 pandemic, and may be released shortly. The HSC monitors the status of action items in the Plan tasked to federal agencies by convening regular interagency meetings and requesting summaries of progress from agencies. According to a former HSC official who was involved with monitoring the Plan in the prior administration and officials from all of the six agencies, following the development of the Plan, the HSC officials convened interagency meetings at the Sub-Policy Coordination Committee level (deputy assistant secretary or his or her representative) that included discussions on the implementation of action items. The former HSC official stated that these meetings are a forum for monitoring the status of the Plan’s action items. These meetings were held weekly after the release of the Plan and biweekly after the spring of 2008, according to the former HSC official. 
Officials from several of the selected agencies stated that the interagency meetings facilitate interagency cooperation and coordination on the action items in the Plan. Officials also said that these meetings provide a venue to raise and address concerns relating to how to implement particular action items, and enable them to build relationships with their colleagues in other agencies. In addition, the HSC requested that agencies provide the Council with periodic summaries of their progress on the action items in preparation for the HSC’s progress reports, according to officials from all of the selected agencies. Officials from the six selected agencies informed us that, in this administration, the NSS continues to lead the interagency process used to monitor progress of the Plan. Officials from several of the selected agencies stated that the NSS continues to hold meetings at the Sub-Interagency Policy Committee level to monitor efforts related to pandemic influenza, with a primary focus on the 2009 H1N1 response. According to an NSS official, the NSS has also requested periodic summaries of progress from agencies on the action items. For action items that involve multiple federal agencies, the six agencies monitor the action items assigned to them by designating one or two agencies to report one consolidated summary of progress for each action item to the HSC, according to agency officials. Some action items task additional federal agencies with a support role as well. According to agency officials, all agencies tasked with responsibility for an action item have to approve its summary of progress before it is provided to the HSC. The HSC’s 6-month, 1-year, and 2-year progress reports state that the action items’ summaries in the reports were prepared by relevant agencies and departments. 
Officials from all six agencies said that the HSC does not always require them to submit supporting documentation along with their summary of progress to determine if an action item is complete. For instance, officials at three of the agencies said that the HSC does not require them to submit supporting documentation, while officials from two other agencies said that additional information is required by the HSC if it is not convinced about the completeness of an action item, or if it is unclear that the respective measure of performance was met based on the summary of progress. For the 112 action items in the Plan that include both federal agencies and nonfederal entities, the responsible federal agencies determined how they would work with and monitor the nonfederal entities. According to the former HSC official, the responsible agencies determined how these action items would be implemented, including deciding which nonfederal entities they would work with and in what manner they would work with them. Among the six agencies we reviewed, five said that they worked with nonfederal entities to implement some of the action items in the Plan. For example, DOT officials stated that they worked with professional associations to develop guidelines and recommendations for Emergency Medical Services (EMS) and 9-1-1 call centers, and HHS officials told us that they worked with medical experts to develop guidance on mass casualty care. We interviewed representatives of nine of these nonfederal entities and all of them confirmed that the status of the action items with which they were associated had been accurately reported. However, they also told us that they had not been asked for input into the summaries of progress for the action items with which they were associated and had therefore been unable to check the accuracy of the summaries before they were reported. 
The HSC makes the final determination as to whether the Plan’s action items are completed, according to the former HSC official and officials from the selected agencies. The HSC bases its determination on information from federal agencies, and uses the measure of performance associated with the action items as criteria for completion, as stated in the HSC’s 6-month, 1-year, and 2-year progress reports. Officials from three of the selected agencies stated that their agencies advise the HSC as to whether they believe an action item is complete when they provide summaries of progress to the HSC, while officials from two selected agencies stated that they provide summaries of progress to the HSC, and the HSC ultimately determines if an action item is complete. An interagency group led by the HSC reviews the agencies’ summaries of progress to help determine if action items are complete. The former HSC official told us that the HSC’s method of assessing whether an action item was complete depends on the specific action item. For some action items, the former HSC official stated that the summary of progress is reviewed by both an interagency group and a technical working group consisting of subject-matter experts. As we reported in August 2007, state and local jurisdictions that will play crucial roles in preparing for and responding to a pandemic were not directly involved in developing the Plan, even though it relies on these stakeholders’ efforts. Stakeholder involvement during the planning process is important to ensure that the federal government’s and nonfederal entities’ responsibilities are clearly understood and agreed upon. Moreover, the Plan states that in the event of an influenza pandemic, the distributed nature and sheer burden of disease across the nation would mean that the federal government’s support to any particular community is likely to be limited, with the primary response to a pandemic coming from state governments and local communities. 
In our June 2008 report on states’ influenza pandemic planning and exercising, officials from selected states and localities confirmed that they were not directly involved in developing the Plan. Further, HHS officials confirmed that the Plan was developed by the federal government without any state input. Although the Plan calls for actions to be carried out by states, local jurisdictions, and other entities, including the private sector, it gives no indication of how these actions will be monitored and how their completion will be ensured. While the HSC reported on progress on all of the action items involving both federal and nonfederal entities that are included in the 2-year progress report, the 17 action items that are intended for nonfederal entities are not monitored or reported on by the HSC or the six federal agencies we reviewed. According to the former HSC official in the prior administration and an NSS official in the current administration, the HSC is not in a position to assess progress on these action items because the federal government cannot direct nonfederal entities to complete them. Therefore, these 17 action items do not contain measures of performance against which to measure progress. Although the HSC’s 1- and 2-year progress reports stated that the HSC intended to continue and intensify its work with nonfederal entities, the 2-year progress report does not have any information on work conducted on these 17 action items nor is their status reported. Examples of the 17 action items intended for nonfederal entities include the following: State, local, and tribal pandemic preparedness plans should address the implementation and enforcement of isolation and quarantine, the conduct of mass immunization programs, and provisions for release or exception. 
States should ensure that pandemic response plans adequately address law enforcement and public safety preparedness across the range of response actions that may be implemented, and that these plans are integrated with authorities that may be exercised by federal agencies and other state, local, and tribal governments. Although there is no information on these two action items in the HSC’s 2-year progress report, we reported in June 2008 that HHS had led a multidepartment effort to review pertinent parts of states’ influenza pandemic plans in 22 priority areas, and had provided feedback to states in November 2007. These priority areas included mass vaccination, law enforcement, and community containment, which includes community-level interventions designed to limit the transmission of a pandemic virus with emphasis on isolation and quarantine, closing schools, and discouragement of large public gatherings, at a minimum. This HHS-led review found major gaps in these three areas, which are activities cited in the two action items noted above. Since our 2008 report, HHS led a second interagency assessment of state influenza pandemic plans, which found that although states have made important progress toward preparing to combat an influenza pandemic, most states still have major gaps in their pandemic plans. Thus, for these two action items, HHS had gathered information on their status for other purposes and made it publicly available on www.flu.gov, but this information was not reported in the HSC’s progress reports. The Plan includes response-related action items that have a measure of performance or time frame associated with a pandemic or animal outbreak. In a response-related section of its 2-year progress report, the HSC states that although neither a pandemic nor animal outbreak had occurred in the United States as of October 2008, the federal government had exercised many of the capabilities called for in these action items. 
We found that the Plan does not describe the specific circumstances, such as the type or severity of an outbreak or pandemic, under which the response-related action items would be undertaken. In addition, for response-related action items in which the trigger is not an outbreak or pandemic, the Plan does not describe the types of information that would be needed in order to make a decision to implement the action items. For example, one of the action items, shown in table 1 below, calls for DOS and DHS to impose restrictions on travel into the United States as appropriate. However, a senior DOS official told us that the agency does not have triggers for when these travel restrictions would be implemented. As we have previously reported, in preparing for, responding to, and recovering from any catastrophic disaster, roles and responsibilities must be clearly defined, effectively communicated, and well understood in order to facilitate rapid and effective decision making. In an August 2009 report on U.S. preparations for the 2009 H1N1 pandemic, the President’s Council of Advisors on Science and Technology highlighted the need for quantitative triggers and recommended that federal agencies adopt structured frameworks for key decision making by incorporating scenarios and specific trigger points for action. As of late May 2009, an official from only one of the four selected agencies responsible for the 10 response-related action items in our sample, the Deputy Associate Director for Security Policy at DOT, stated that the 2009 H1N1 outbreak had triggered an action item from this group (5.3.5.3) for which the agency was responsible. For the remaining nine action items, officials from all four agencies noted that none of the action items for which their agency had responsibility were relevant to the H1N1 outbreak at that time. 
The Plan states that the operational details on how to carry out actions in support of the Strategy will be included in departmental pandemic plans. Federal agencies may have operational plans or other existing guidance that would specify the information needed to determine whether to undertake the response-related action items during a pandemic. However, the Plan itself gives no indication of whether these plans or guidance actually contain such information, or whether the information that would be needed has been determined in advance. The HSC reported in October 2008 that about 75 percent of the 324 action items in the Plan were designated as complete based on its criteria of whether the measures of performance were achieved. Among the 60 action items in our sample, 49 had been designated as complete, 3 designated as in-progress, and 8 had no reported status. For a number of reasons, as stated in the following sections, it was difficult to determine the actual status of some of the 49 selected action items that were designated complete. As discussed earlier, according to the HSC’s progress reports, a determination that an action item is complete is based on whether the action item’s measure of performance is achieved. Our review found, however, that for more than half of the action items considered complete, the measures of performance did not fully address all the activities contained in their descriptions. In some instances, the HSC used information other than the measures of performance to report progress. All of the 49 action items designated as complete that we reviewed have both a description of activities to be carried out, and a measure of performance, which generally is used as an indicator to measure progress of completion by responsible parties in carrying out what is specified in its respective description. We found that the types of performance measures for selected action items varied widely. 
For instance, measures of performance may call for processes to be developed and implemented, changes to be effected in foreign countries, or products such as guidance or a vaccine to be developed. As we reported in 2007, most of the Plan’s measures of performance for action items are focused on activities, such as disseminating guidance, and are not always clearly linked to the goals and objectives described in the Strategy and Plan. In these cases, it is difficult to determine whether the goals and objectives have been achieved. We found that the selected action items’ measures of performance addressed the descriptions of their respective action items to varying degrees. Examples can be seen in table 2. All of the 49 selected action items’ measures of performance either fully or partially addressed their respective descriptions. In 23 of the 49 selected action items that were designated as complete in the HSC’s 2-year progress report, we found that the measures of performance fully addressed the respective descriptions for the action items. For the remaining 26 action items, the measures of performance partially addressed their respective descriptions. For example, as noted in table 2, the description for one of the action items calls for DOD to conduct an assessment of military support related to transportation and borders that could be requested during a pandemic. While the measure of performance did not include this activity, the HSC nevertheless designated the action item as complete. Our review also found that for 22 of the 49 selected action items designated as complete in the HSC’s 2-year progress report, the progress summaries fully addressed how the measures of performance were achieved, thereby supporting the HSC’s designation of complete for these action items. However, for the other 27 selected action items designated as complete, the progress summaries did not fully address how the measures of performance were achieved. 
Specifically, in 18 of the 27 selected action items, the HSC’s summaries addressed some but not all of the activities specified in the respective measures of performance, and for the remaining 9 action items, the summaries did not address at all how the measures of performance were achieved. In these instances, we found that the HSC either used the action item’s description, or used information that was not reflected in either the description or measure of performance, to assess completion. Table 3 below includes two examples where the HSC summaries partially addressed or did not address the action item’s measure of performance. Of the 49 selected action items designated as complete, 11 have measures of performance that cannot be accomplished solely by responsible entities tasked to work on these action items. Five of these require other countries’ assistance while the remaining six require nondesignated entities’ participation in order for the action items to be completed. For these 11 action items, the responsible federal agencies are not able to achieve the measures of performance for these action items on their own, but can provide assistance, such as funding and guidance, to encourage completion of these action items by others. For example, one of the action items below calls for DOS to promote, among other things, rapid reporting of influenza cases by other nations; the measure of performance is that all high-risk countries improve their capacity for rapid reporting. Even though this outcome is beyond DOS’s ability to achieve on its own, the action item was considered complete, and no explanation was provided. 
Some examples of the measures of performance that cannot be entirely fulfilled by the agencies and organizations in the United States include the following: DOS, in coordination with other agencies, shall work on a continuing basis through the Partnership and through bilateral and multilateral diplomatic contacts to promote transparency, scientific cooperation, and rapid reporting of avian and human influenza cases by other nations within 12 months. Measure of performance: All high-risk countries actively cooperating in improving capacity for transparent, rapid reporting of outbreaks. USDA shall provide technical assistance to priority countries to increase safety of animal products by identifying potentially contaminated animal products, developing screening protocols, regulations, and enforcement capacities that conform to the World Organisation for Animal Health (OIE) avian influenza standards for transboundary movement of animal products, within 36 months. Measure of performance: All priority countries have protocols and regulations in place or in process. We previously reported in June 2007 that DOS officials confirmed that the following action item, which was designated as complete in the HSC’s 2-year progress report, had a measure of performance that was difficult to address because the agency did not have the means to accurately estimate the effective reach or impact of their efforts on target audiences. As a result, this action item could only be achieved with the participation of nondesignated entities. DOS, in coordination with HHS, the United States Agency for International Development (USAID), USDA, DOD, and DHS, shall lead an interagency public diplomacy group to develop a coordinated, integrated, and prioritized plan to communicate U.S. 
foreign policy objectives relating to our international engagement on avian and pandemic influenza to key stakeholders (e.g., the American people, the foreign public, nongovernmental organizations, international businesses), within 3 months. Measure of performance: Number and range of target audiences reached with core public affairs and public diplomacy messages, and impact of these messages on public responses to avian and pandemic influenza. We found that work has continued on some of the selected action items the HSC designated as complete, including providing additional guidance, training, and exercises. In some instances, continued efforts may be warranted—for example, when new information or circumstances might require an update of guidance. In addition, according to the HSC’s progress reports, a determination of “complete” indicates that the measure of performance has been achieved but does not necessarily mean that work on the action items has ended; the work is ongoing in many cases. Our analysis of the 1-year and 2-year progress reports confirmed that there was additional work conducted for 20 of the 34 selected action items initially designated complete as of the 1-year report. For example, one of the action items called for national spokespersons to coordinate and communicate messages to the public. The HSC’s 1-year report stated that for this action item, which was designated as complete, the federal government had engaged various spokespersons by providing training for risk communications and supporting community and individual actions to reduce illness and death. In the HSC’s 2-year report, the HSC provided new information on an influenza pandemic communications plan, which included messaging and spokesperson development components and numerous regional and local crisis and emergency risk communications trainings. 
In another example, an action item required all hospitals and health facilities funded by HHS, DOD, and the Department of Veterans Affairs (VA) to develop and publicly disseminate guidance materials on infection control. In its 1-year report, the HSC provided information on guidance documents issued by HHS on hospital infection control and VA’s national infection prevention campaign, whereas in its 2-year report, the HSC reported on new information related to two DOD guidance documents on preparation and response health policy and clinical and public health guidelines for the military health system. In a third example, work continued on action items 6.1.1.3, 7.3.2.1, and 9.1.2.2, which concern implementation of a national animal vaccination program. An official from the Food and Agriculture Organization (FAO) also confirmed that additional work had continued on these action items in conjunction with OIE in developing joint strategies for highly pathogenic avian influenza. In 2007, we recommended that the HSC establish a specific process and time frame for updating the Plan to include a number of features we identified as important elements of a national strategy, including a process for monitoring and reporting on progress. While the Plan’s assumptions do not match the 2009 H1N1 pandemic, making some of the action items less relevant to current circumstances, the process for monitoring and reporting on the status of pandemic plans is not particular to any one type of pandemic scenario. The lessons learned from developing and monitoring the 2006 Plan should be relevant to all future pandemic planning efforts. In particular, although the HSC, which is supported by the NSS, has monitored progress on the Plan, it has not yet established a process for updating the Plan, as previously reported, and we have found additional areas for improvement in how the Plan has been monitored and the status of action items assessed. 
For one thing, the NSS and the responsible federal agencies have not been monitoring or reporting on action items in the Plan intended for state and local governments and other nonfederal entities, even though, in some instances, they have information available that would allow them to do so, such as the interagency assessment of state pandemic plans led by HHS. Given that the Plan states that in a pandemic the primary response will come from states and communities, this information should be in the progress reports, notwithstanding that it may be available in other sources. Similarly, while agency operational plans or guidance may provide the information under which the response-related action items would be undertaken, the Plan itself contains no such information. As a result, it is unclear whether the information that would be needed to activate the response-related action items in the Plan has been identified or worked out in advance. The HSC designated about 75 percent of the action items in the Plan as completed, as of October 2008. However, based on our review of 49 of the 245 action items designated as complete, it is difficult to determine the actual status of some of them. The HSC and the responsible federal agencies generally relied on the measures of performance to assess progress in completing the selected action items. However, for more than half of the selected action items, we found that the measures of performance did not fully reflect all of the activities called for in the action items’ descriptions. While the HSC’s progress summaries sometimes corrected for this by referring to activities in the action item’s description omitted from the measures of performance, future progress reports would benefit from using measures of performance that are more consistent with the action items’ descriptions. 
This would, in turn, provide a more consistent and complete basis for describing progress in implementing the Plan. Finally, although the administration has prepared an additional planning document tailored specifically to the 2009 H1N1 pandemic, the Strategy and Plan will still be needed for future events. Because most of the action items were to be completed by May 2009, the Plan should be updated, as we earlier recommended, to include all the elements identified in our 2007 report and to take into account the lessons learned from the 2009 H1N1 pandemic. As part of the process for monitoring the progress made in preparing the nation for an influenza pandemic, the Plan should address the monitoring and assessment improvements we identified in this report. To improve how progress is monitored and completion is assessed under the Plan and in future updates of the Plan, the HSC should instruct the NSS to work with responsible federal agencies to develop a monitoring and reporting process for action items that are intended for nonfederal entities, such as state and local governments; identify the types of information needed to decide whether to carry out the response-related action items; and develop measures of performance that are more consistent with the descriptions of the action items. We provided a draft of this report to the Homeland Security Council (HSC), and to the Secretaries of Agriculture, Defense, Health and Human Services, Homeland Security, State, and Transportation for their review and comment. In written comments on our draft report, the Principal Deputy Counsel to the President, on behalf of the administration, stated that our report is one notable source of suggestions for improving national pandemic planning, and that the administration would give consideration to our findings and recommendations as it continues its work in this area. The HSC also provided us with technical comments, which we incorporated as appropriate. 
HHS noted in its comments that important questions and analysis that underpin our findings and recommendations were not presented or addressed in this report, including whether (1) the original Plan was adequate, (2) the priorities selected were appropriate, (3) the measures selected for monitoring progress were appropriate, and (4) the monitoring parameters selected were measurable or even achievable. We agree that these are important questions. However, the objectives of this report were to (1) determine how the HSC and responsible federal agencies monitor the progress and completion of the Plan’s action items and (2) assess the extent to which selected action items have been completed. As such, we believe that we have in fact addressed the issues raised by HHS in this report in our examination of action items and related measures of performance, as well as in our prior recommendation that has not yet been implemented to incorporate into future updates of the Plan the lessons learned from exercises and other events, such as the H1N1 pandemic. HHS also provided two other general comments. First, regarding our discussion related to the lack of details in the Plan on the information that would be used to activate the response-related action items, HHS stated that it would be inappropriate to set specific trigger points to activate specific responses because an influenza virus has an infinite range of potential characteristics, which are not predictable, and that flexibility is necessary. HHS further stated that it would be more appropriate to discuss the “types” of circumstances and responses that should be planned for. We agree that flexibility is necessary to assess the specific circumstances under which to implement the response-related action items in the Plan, given the changing nature of an influenza virus. We agree with HHS that the Plan should discuss the types of circumstances that should be planned for in a pandemic. 
We have made changes to the report to clarify this point. Second, with respect to our discussion of additional work conducted on selected action items designated as complete, HHS noted that preparedness is a continuous and iterative improvement process based on lessons learned, and that ongoing training and exercises should be iterative and adapt to lessons learned. We agree. As we noted in this report, in some instances, continued efforts on action items may be warranted—for example, when new information or circumstances might require an update of guidance. Our concern, however, is that it is unclear what additional work or progress had been made on these action items, since the HSC had designated them as complete. DHS stated that the information in our report is generally accurate and had no substantive comments on the content of the report. DHS further stated that while improvements can be made in the Plan as we outlined in our report, there has been significant work accomplished in pandemic preparedness as a direct result of the Plan. For example, DHS noted that significant collaboration at all levels of government and the private sector has occurred, which enabled a more efficient and coordinated response for the 2009 H1N1 pandemic. DOT provided us with technical comments, which we incorporated. DOD, DOS, and USDA informed us that they did not have any comments on the draft report. The White House, HHS, and DHS provided written comments on a draft of this report, which are reprinted in appendixes III, IV, and V, respectively. As agreed with your office, we plan no further distribution of this report until 30 days from its date, unless you publicly announce its contents earlier. At that time we will send copies to the HSC, Secretary of Agriculture, Secretary of Defense, Secretary of Health and Human Services, Secretary of Homeland Security, Secretary of State, Secretary of Transportation, and other interested parties. 
In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-6543 or steinhardtb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. The objectives of this study were to (1) determine how the Homeland Security Council (HSC) and the responsible federal agencies are monitoring the progress and completion of the Implementation Plan for the National Strategy for Pandemic Influenza (Plan) action items, and (2) assess the extent to which selected action items have been completed, whether activity has continued on the selected action items reported as complete, and the nature of that work. We did not assess the response efforts for the 2009 H1N1 pandemic in this report, but we continue to monitor the outbreak and the federal government’s response. To address these objectives, we conducted an in-depth analysis of a random sample of 60 action items in the Plan. We drew a random sample from the 286 action items involving six federal agencies with primary responsibility for ensuring completion of the large majority (88 percent) of the 324 action items. These six agencies are the Department of Defense (DOD), Department of Health and Human Services (HHS), Department of Homeland Security (DHS), Department of State (DOS), Department of Transportation (DOT), and the Department of Agriculture (USDA). Of the 60 action items selected for our sample, the HSC reported that 49 were completed, 3 were in progress, and 8 had no status updates in its 2-year progress report. 
The purpose of this random sampling was not to be able to generalize our findings to the entire population of 286; rather, it was to produce a sample that had a distribution of items generally mirroring that of the overall population of 286 using the following variables, so that the sample would include action items that represented (1) the six agencies with primary responsibility for implementing the Plan, (2) the three pillars in the Plan, (3) the presence of collaboration between federal agencies and nonfederal entities (i.e., state, local, and tribal entities, the private sector, international organizations, and nongovernmental organizations), and (4) various time frames for when they should be completed, which range from within 24 hours of an outbreak to 60 months from the release of the Plan in May 2006. We do not generalize the results of our analysis because the particular analytical steps we took across the selected action items varied and, as a result, there was no common underlying measure on which to generalize results to all of the action items in the Plan. In addition, we did not review all of the action items in the Plan in depth because our analyses involved multiple assessments for each action item, including the review of large volumes of agency documentation in determining the level of evidence for completion of the action item. For both objectives, we interviewed officials and obtained documentation from the six federal agencies. We reviewed the HSC's 6-month, 1-year, and 2-year progress reports and the HSC's 1-year summary report on the implementation of the action items in the Plan. In addition, we interviewed a senior HSC official from the previous administration and the Director of Medical Preparedness Policy for the White House National Security Staff (NSS) in the current administration responsible for overseeing the implementation of the Plan. We also relied on our prior pandemic work to inform our analysis. 
To address the first objective on how the HSC and responsible federal agencies are monitoring the progress and completion of the Plan's action items, we assessed information from interviews and documentation, such as the HSC's progress reports, on how the HSC and the selected agencies monitored the progress and completion of all action items. We also requested information from the six agencies on how the NSS is currently overseeing the interagency process used for monitoring the implementation of action items in the Plan. We also interviewed representatives from nine nonfederal entities, such as the World Organisation for Animal Health (OIE) and the Denver Health Medical Center, which agency officials had identified as working collaboratively with them on four action items in our sample, and asked these representatives whether the agencies asked for information on the progress of implementing these action items. In addition, we reviewed the Plan and the HSC's 2-year progress report to identify specific circumstances that would trigger the response-related action items that are activated by an animal outbreak or pandemic. We also collected information from the four selected agencies that had primary responsibility for the 10 response-related action items in our sample regarding criteria that would trigger these action items. To address the second objective, we analyzed the 49 action items in our random sample that the HSC's 2-year progress report designated as complete. We also collected documentation and conducted interviews with selected agency officials from the six agencies and a senior HSC official from the prior administration. 
To describe the extent to which action items had been completed, we analyzed information on the 49 selected action items in the Plan, the HSC progress reports, and supporting documentation provided by the six agencies with primary responsibility for each of the 49 action items to demonstrate how the measures of performance were achieved based on the HSC's criteria for completion. Specifically, we analyzed the 49 selected action items designated as complete to assess whether

1. the measures of performance fully addressed, partially addressed, or did not address their respective action item descriptions;
2. the summaries contained in the HSC's 2-year progress report fully addressed, partially addressed, or did not address how the measures of performance were achieved; and
3. the measures of performance could be accomplished solely by the responsible entities that are tasked to work on the action items.

To evaluate the extent of work that has continued on the 49 action items in our sample that were designated as complete, and the nature of that work, we gathered information in two ways. First, we compared the HSC's 1-year and 2-year progress reports for 34 selected action items designated as complete as of the 1-year report by analyzing each action item's summary in the HSC's 1- and 2-year progress reports for any new information on work conducted. Second, we asked the six agencies with primary responsibility if they had performed additional work after action items were designated as complete and, if so, to provide a brief description of the nature of that work. For 27 of the 49 action items designated as complete, the agencies indicated that they had performed additional work after the action items were designated as complete. 
For 22 of those 27 action items, the agencies also specified the nature of the additional work. To ensure consistency and accuracy of our analysis, at least two GAO analysts independently analyzed the data we received for the 49 selected action items in our sample designated as complete and then compared their results. In cases where there were discrepancies, the two analysts reconciled their differences for a final response. Additionally, methodologists in GAO's Applied Research and Methods team conducted an independent analysis and verification of our assessment by reviewing whether the measures of performance addressed their respective descriptions and whether the HSC summaries addressed how the measures of performance were achieved for all 49 action items designated as complete. In cases where there were discrepancies between the analysts' and methodologists' teams, a joint reconciliation was conducted for a final response. 
We conducted this performance audit from July 2008 to November 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 

The selected action items and their associated measures of performance are listed below. 

… resources and regional expertise to fight avian influenza (e.g., the Asia-Pacific Economic Cooperation region-wide tabletop exercise, a Symposium on … to be held in Beijing in April 2006, and the Regional Emerging Diseases Intervention (REDI) Center in Singapore), the … Initiative on Avian Influenza, and the U.S.-… Avian Influenza Demonstration Project; and to expand the number of countries fully … and/or international technical agencies in the fight against avian influenza, within 6 months. Measure of performance: Finalized action plans that outline goals to be achieved and timeframes in which the goals will be achieved. 

The Department of Health and Human Services (HHS) shall staff the REDI Center in Singapore within 3 months. Measure of performance: U.S. government staff provided to the REDI Center. 

The United States Department of Agriculture (USDA), working with the United States Agency for International Development (USAID) and the Partnership, shall support the Food and Agriculture Organization (FAO) and the World Organisation for Animal Health (OIE) to implement an instrument to assess priority countries' veterinary infrastructure for prevention, surveillance, and control of animal influenza and increase veterinary rapid response capacity by supporting national capacities for animal surveillance, diagnostics, training, and containment in at-risk countries, within 9 months. Measure of performance: Per the OIE's Performance, Vision and Strategy Instrument, assessment tools exercised and results communicated to the Partnership, and priority countries are developing, or have in place, an infrastructure capable of supporting their national prevention and response plans for avian or other animal influenza. 

… to enable them to make informed decisions and take appropriate personal measures. Measure of performance: Majority of registered U.S. citizens have access to accurate and current information on influenza. 

… in the United States consistent with U.S. …, within 6 months. Measure of performance: Briefing materials and an action plan in place for engaging with relevant federal, state, tribal, and local authorities. 

Measure of performance: All priority countries and partner organizations have received relevant information on influenza vaccines and application strategies. 

The Department of Justice (DOJ) and DOS, in coordination with HHS, shall consider whether the U.S. Government, in order to benefit from the protections of the Defense Appropriations Act, should seek to negotiate liability-limiting treaties or arrangements covering U.S. contributions to an international stockpile of vaccine and other medical countermeasures, within 6 months. Measure of performance: Review initiated and decision rendered. 
USDA, in collaboration with FAO and OIE, shall develop and provide best-practice guidelines and technical expertise to countries that express interest in obtaining aid in the implementation of a national animal vaccination program, within 4 months. Measure of performance: Interested countries receive guidelines and other assistance within 3 months of their request. 

DOS, in coordination with other agencies, shall work on a continuing basis through the Partnership and through bilateral and multilateral diplomatic contacts to promote transparency, scientific cooperation, and rapid reporting of avian and human influenza cases by other nations, within 12 months. Measure of performance: All high-risk countries actively engaged in improving capacity for transparent, rapid reporting of outbreaks. 

HHS shall support the World Health Organization (WHO) Secretariat to enhance the early detection, identification and reporting of infectious disease outbreaks through the WHO's Influenza Network and Global Outbreak and Alert Response Network, within 12 months. Measure of performance: Expansion of the network to regions not currently part of the network. 

HHS shall enhance surveillance and response to high priority diseases, including influenza with pandemic potential, by training physicians and public health workers in disease surveillance, applied epidemiology and outbreak response at its Global Disease Detection and Response Centers in Thailand and China and at the U.S.-China Collaborative Program on Emerging and Re-Emerging Infectious Diseases, within 12 months. Measure of performance: 50 physicians and public health workers living in priority countries receive training in disease surveillance, applied epidemiology and outbreak response. 

DOD, in coordination with DOS and with the cooperation of the host nation, shall assist with influenza surveillance of host nation populations in accordance with existing treaties and international agreements, within 24 months. Measure of performance: Medical surveillance "watchboard" expanded to include host nations. 
HHS shall develop and implement programs in basic laboratory techniques related to influenza sample preparation and diagnostics in priority countries, within 9 months. Measure of performance: 25 laboratory scientists trained in influenza sample preparation and diagnostics. 

HHS and USAID shall work with the WHO Secretariat and private sector partners, through existing bilateral agreements, to provide support for human health diagnostic laboratories by developing and giving assistance in implementing rapid international laboratory diagnostics protocols and standards in priority countries, within 12 months. Measure of performance: 75 percent of priority countries have improved human diagnostic laboratory capacity. 

DOD, in coordination with HHS, shall prioritize international DOD laboratory research efforts to develop, refine, and validate diagnostic methods to rapidly identify pathogens, within 18 months. Measure of performance: Completion of prioritized research plan, resources identified, and tasks assigned across DOD medical research facilities. 

DOD shall work with priority nations' military forces to assess existing laboratory capacity, rapid response teams, and portable field assay testing equipment, and fund essential commodities and training necessary to achieve an effective national military diagnostic capability, within 18 months. Measure of performance: Assessments completed, proposals accepted, and funding made available to priority countries. 

HHS and USAID shall develop, in coordination with the WHO Secretariat and other donor countries, rapid response protocols for use in responding quickly to credible reports of human-to-human transmission that may indicate the beginnings of an influenza pandemic, within 12 months. Measure of performance: Adoption of protocols by stakeholders. 
USDA shall provide technical assistance to priority countries to increase safety of animal products by identifying potentially contaminated animal products, developing screening protocols, regulations, and enforcement capacities that conform to OIE avian influenza standards for transboundary movement of animal products, within 36 months. Measure of performance: All priority countries have protocols and regulations in place or in process. 

DOS, in coordination with HHS, USAID, USDA, DOD, and the Department of Homeland Security (DHS), shall lead an interagency public diplomacy group to develop a coordinated, integrated, and prioritized plan to communicate U.S. foreign policy objectives relating to our international engagement on avian and pandemic influenza to key stakeholders (e.g., the American people, the foreign public, NGOs, international businesses), within 3 months. Measure of performance: Number and range of target audiences reached with core public affairs and public diplomacy messages, and impact of these messages on public responses to avian and pandemic influenza. 

USDA, in coordination with DHS, the United States Trade Representative (USTR), and DOS, shall ensure that clear and coordinated messages are provided to international trading partners regarding animal disease outbreak response activities in the United States. Measure of performance: Within 24 hours of an outbreak, appropriate messages will be shared with key animal/animal product trading partners. 

DOD, in coordination with DHS, the Department of Transportation (DOT), DOJ, and DOS, shall conduct an assessment of military support related to transportation and borders that may be requested during a pandemic and develop a comprehensive contingency plan for Defense Support to Civil Authorities, within 18 months. Measure of performance: Defense Support to Civil Authorities plan in place that addresses emergency transportation and border support. 
DHS, in coordination with DOT, the Department of Labor (DOL), the Office of Personnel Management, and DOS, shall disseminate workforce protection information to stakeholders, conduct outreach to stakeholders, and implement a comprehensive program for all Federal transportation and border staff, within 12 months. Measure of performance: 100 percent of workforce has or has access to information on pandemic influenza risk and appropriate protective measures. 

DHS, DOT, and HHS, in coordination with transportation and border stakeholders, and appropriate state and local health authorities, shall develop aviation, land border, and maritime entry and exit protocols and/or screening protocols, and education materials for non-medical, front-line screeners and officers to identify potentially infected persons or cargo, within 10 months. Measure of performance: Protocols and training materials developed and disseminated. 

DHS and HHS, in coordination with DOT, DOJ, and appropriate State and local health authorities, shall develop detection, diagnosis, quarantine, isolation, emergency medical services (EMS) transport, reporting, and enforcement protocols and education materials for travelers, and undocumented aliens apprehended at and between ports of entry, who have signs or symptoms of pandemic influenza or who may have been exposed to influenza, within 10 months. Measure of performance: Protocols developed and distributed to all ports of entry. 

Measure of performance: Revised process for withdrawing permits of high-risk importers. Measure of performance: Risk-based protocols established and in use. 

USDA, DHS, and DOI, in coordination with DOS, HHS, and the Department of Commerce (DOC), shall conduct outreach and expand education campaigns for the public, agricultural stakeholders, wildlife trade community, and cargo and animal importers/exporters on import and export regulations and influenza disease risks, within 12 months. Measure of performance: 100 percent of key stakeholders are aware of current import and export regulations and penalties for non-compliance. 
DOS and DHS, in coordination with DOT, DOC, HHS, the Department of the Treasury (Treasury), and USDA, shall work with foreign counterparts to limit or restrict travel from affected regions to the United States, as appropriate, and notify host government(s) and the traveling public. Measure of performance: Measures imposed within 24 hours of the decision to do so, after appropriate notifications made. 

DHS, in coordination with DOS, USDA and DOI, shall provide countries with guidance to increase scrutiny of cargo and other imported items through existing programs, such as the Container Security Initiative, and impose country-based restrictions or item-specific embargoes. Measure of performance: Guidance, which may include information on restrictions, is provided for increased scrutiny of cargo and other imported items within 24 hours upon notification of an outbreak. 

DHS, in coordination with USDA, DOS, DOC, DOI, and shippers, shall rapidly implement and enforce cargo restrictions for export or import of potentially contaminated cargo, including embargo of live birds, and notify international partners/shippers. Measure of performance: Measures implemented within 6 hours of the decision to do so. 

DHS, if needed, will implement contingency plans to maintain border control during a period of pandemic influenza induced mass migration. Measure of performance: Contingency plan activated within 24 hours of notification. 

Measure of performance: All regulatory waivers, as needed, balance the need to expedite services with safety. 

DOT, in coordination with DHS, state, local, and tribal governments, and the private sector, shall monitor system closures, assess effects on the transportation system, and implement contingency plans. Measure of performance: Timely reports transmitted to DHS and other appropriate entities, containing relevant, current, and accurate information on the status of the transportation sector and impacts resulting from the pandemic; when appropriate, contingency plans implemented within no more than 24 hours of a report of a transportation sector impact or issue. 
DOT, in support of DHS and in coordination with other emergency support function (ESF) #1 support agencies, shall work closely with the private sector and state, local, and tribal entities to restore the transportation system, including decontamination and reprioritization of essential commodity shipments. Measure of performance: Backlogs or shortages of essential commodities and goods quickly eliminated, returning production and consumption to prepandemic levels. 

DHS, in coordination with HHS, DOJ, DOT, and DOD, shall be prepared to provide emergency response element training (e.g., incident management, triage, security, and communications) and exercise assistance upon request of state, local, and tribal communities and public health entities, within 6 months. Measure of performance: Percentage of requests for training and assistance fulfilled. 

HHS, in coordination with DHS, DOD, and the Department of Veterans Affairs (VA), shall develop a joint strategy defining the objectives, conditions, and mechanisms for deployment under which the National Disaster Medical System assets, U.S. Public Health Service Commissioned Corps, Epidemic Intelligence Service officers, and DOD/VA health care personnel and public health officers would be deployed during a pandemic, within 9 months. Measure of performance: Interagency strategy completed and tested for the deployment of federal medical personnel during a pandemic. 

HHS, in coordination with DHS, DOS, DOD, VA, and other federal partners, shall develop, test, and implement a federal government public health emergency communications plan (describing the government's strategy for responding to a pandemic, outlining U.S. international commitments and intentions, and reviewing containment measures that the government believes will be effective as well as those it regards as likely to be ineffective, excessively costly, or harmful), within 6 months. Measure of performance: Containment strategy and response materials completed and published on www.pandemicflu.gov; communications plan implemented. 
HHS, in coordination with DHS, DOD, and the VA, and in collaboration with state, local, and tribal health agencies and the academic community, shall select and retain opinion leaders and medical experts to serve as credible spokespersons to coordinate and effectively communicate important and informative messages to the public, within 6 months. Measure of performance: National spokespersons engaged in communications campaign. 

DOT, in cooperation with HHS, DHS, and DOC, shall develop model protocols for 9-1-1 call centers and public safety answering points that address the provision of information to the public, facilitate caller screening, and assist with priority dispatch of limited emergency medical services, within 12 months. Measure of performance: Model protocols developed and disseminated to 9-1-1 call centers and public safety answering points. 

Measure of performance: Domestic vaccine manufacturing capacity in place to produce 300 million courses of vaccine within 6 months of development of a vaccine reference strain during a pandemic. 

DOT, in coordination with HHS, DHS, state, local, and tribal officials and other EMS stakeholders, shall develop suggested EMS pandemic influenza guidelines for statewide adoption that address: clinical standards, education, treatment protocols, decontamination procedures, medical direction, scope of practice, legal parameters, and other issues, within 12 months. Measure of performance: EMS pandemic influenza guidelines completed. 

HHS, in coordination with DOD, VA, and in collaboration with state, territorial, tribal, and local partners, shall develop/refine mechanisms to: (1) track adverse events following vaccine and antiviral administration; (2) ensure that individuals obtain additional doses of vaccine, if necessary; and (3) define protocols for conducting vaccine- and antiviral-effectiveness studies during a pandemic, within 18 months. Measure of performance: Mechanism(s) to track vaccine and antiviral medication coverage and adverse events developed; vaccine- and antiviral-effectiveness study protocols developed. 
HHS, in coordination with DHS and sector-specific agencies, DOS, DOD, DOL, and VA, shall establish a strategy for shifting priorities based on at-risk populations, supplies and efficacy of countermeasures against the circulating pandemic strain, and characteristics of the virus, within 9 months. Measure of performance: Clearly articulated process in place for evaluating and adjusting prepandemic recommendations of groups receiving priority access to medical countermeasures. 

HHS shall support the renovation of existing U.S. manufacturing facilities that produce other Food and Drug Administration licensed cell-based vaccines or biologics and the establishment of new domestic cell-based influenza vaccine manufacturing facilities, within 36 months. Measure of performance: Contracts awarded for renovation or establishment of domestic cell-based influenza vaccine manufacturing capacity. 

HHS, in coordination with DHS, shall develop and test new point-of-care and laboratory-based rapid influenza diagnostics for screening and surveillance, within 18 months. Measure of performance: New grants and contracts awarded to researchers to develop and evaluate new diagnostics. 

HHS shall provide guidance to public health and clinical laboratories on the different types of diagnostic tests and the case definitions to use for influenza at the time of each pandemic phase. Guidelines for the current pandemic alert phase will be disseminated within 3 months. Measure of performance: Dissemination on www.pandemicflu.gov and through other channels of guidance on the use of diagnostic tests for H5N1 and other potential pandemic influenza subtypes. 

HHS, in coordination with DHS, DOD, and VA, and in collaboration with state, local, and tribal authorities, shall be prepared to collect, analyze, integrate, and report information about the status of hospitals and health care systems, healthcare critical infrastructure, and medical materiel requirements, within 12 months. 
Measure of performance: Guidance provided to states and tribal entities on the use and modification of components of the National Hospital Available Beds for Emergencies and Disasters system for implementation at the local level. 

… shall develop, test, and be prepared to implement infection control campaigns for pandemic influenza, within 3 months. Measure of performance: Guidance materials on infection control developed and disseminated on www.pandemicflu.gov and through other channels. 

HHS, in coordination with DHS, VA, and DOD, shall develop and disseminate guidance that explains steps individuals can take to decrease their risk of acquiring or transmitting influenza infection during a pandemic, within 3 months. Measure of performance: Guidance disseminated on www.pandemicflu.gov and through VA and DOD channels. 

USDA and DOI shall perform research to understand better how avian influenza viruses circulate and are transmitted in nature, in order to improve information on biosecurity distributed to local animal owners, producers, processors, markets, auctions, wholesalers, distributors, retailers, and dealers, as well as wildlife management agencies, rehabilitators, and zoos, within 18 months. Measure of performance: Completed research studies provide new information, or validate current information, on the most useful biosecurity measures to be taken to effectively prevent introduction, and limit or prevent spread, of avian influenza viruses in domestic and captive animal populations. 

Measure of performance: An effective avian influenza vaccine that can be delivered simultaneously to multiple birds is ready for commercial development. 

DOI and USDA shall collaborate with state wildlife agencies, universities, and others to increase surveillance of wild birds, particularly migratory water birds and shore birds, in Alaska and other appropriate locations elsewhere in the United States and its territories, to detect influenza viruses with pandemic potential, including highly pathogenic avian influenza H5N1, and establish baseline data for wild birds, within 12 months. 
Measure of performance: Reports detailing geographically appropriate wild bird surveillance and influenza virus testing results. 

USDA shall work with state and tribal entities, and industry groups to perform surveys of game birds and waterfowl raised in captivity, and implement surveillance of birds at auctions, swap meets, flea markets, and public exhibitions, within 12 months. Measure of performance: Samples collected at 50 percent of the largest auctions, swap meets, flea markets, and public exhibitions held in at least five states or tribal entities believed to be at highest risk for an avian influenza introduction. 

USDA shall activate plans to distribute veterinary medical countermeasures and materiel from the National Veterinary Stockpile (NVS) to federal, state, local, and tribal influenza outbreak responders within 24 hours of confirmation of an outbreak in animals of influenza with human pandemic potential, within 9 months. Measure of performance: NVS materiel distributed within 24 hours of confirmation of an outbreak. 

DHS, in coordination with DOJ and HHS, shall develop a pandemic influenza tabletop exercise for state, local, and tribal law enforcement/public safety officials that they can conduct in concert with public health and medical partners, and ensure it is distributed nationwide, within 4 months. Measure of performance: Percent of state, local, and tribal law enforcement/public safety agencies that have received the pandemic influenza tabletop exercise. 

DHS, in coordination with DOJ, DOD, DOT, HHS, and other appropriate federal sector-specific agencies, shall engage in contingency planning and related exercises to ensure they are prepared to sustain EMS, fire, emergency management, public works, and other emergency response functions during a pandemic, within 6 months. Measure of performance: Completed exercise(s) for supporting EMS, fire, emergency management, public works, and other emergency response functions. 

DHS, in coordination with states, localities and tribal entities, shall support private sector preparedness with education, exercise, training, and information sharing outreach programs, within 6 months. 
Preparedness exercises established with private sector partners in all states and U.S. territories. DHS shall develop and operate a national-level monitoring and information sharing system for core essential services to provide status updates to critical infrastructure dependent on these essential services, and aid in sharing real-time impact information, monitoring actions, and prioritizing national support efforts for preparedness, response, and recovery of critical infrastructure sectors within 12 months. National-level critical infrastructure monitoring and information-sharing system established and operational. Data are from the Implementation Plan for the National Strategy for Pandemic Influenza. In addition to the contact named above, Sarah Veale, Assistant Director; Maya Chakko; Susan Sato; David Fox; Melissa Kornblau; Kara Marshall; Mark Ryan; David Dornisch; Andrew Stavisky; and members of GAO’s Pandemic Working Group made key contributions to this report. Influenza Pandemic: Key Securities Market Participants Are Making Progress, but Agencies Could Do More to Address Internet Congestion and Encourage Readiness. GAO-10-8. Washington, D.C.: October 26, 2009. Influenza Pandemic: Gaps in Pandemic Planning and Preparedness Need to Be Addressed. GAO-09-909T. Washington, D.C.: July 29, 2009. Influenza Pandemic: Greater Agency Accountability Needed to Protect Federal Workers in the Event of a Pandemic. GAO-09-783T. Washington, D.C.: June 16, 2009. 
Influenza Pandemic: Increased Agency Accountability Could Help Protect Federal Employees Serving the Public in the Event of a Pandemic. GAO-09-404. Washington, D.C.: June 12, 2009. Influenza Pandemic: Continued Focus on the Nation’s Planning and Preparedness Efforts Remains Essential. GAO-09-760T. Washington, D.C.: June 3, 2009. Influenza Pandemic: Sustaining Focus on the Nation’s Planning and Preparedness Efforts. GAO-09-334. Washington, D.C.: February 26, 2009. Influenza Pandemic: HHS Needs to Continue Its Actions and Finalize Guidance for Pharmaceutical Interventions. GAO-08-671. Washington, D.C.: September 30, 2008. Influenza Pandemic: Federal Agencies Should Continue to Assist States to Address Gaps in Pandemic Planning. GAO-08-539. Washington, D.C.: June 19, 2008. Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources. GAO-08-668. Washington, D.C.: June 13, 2008. Influenza Pandemic: Efforts Under Way to Address Constraints on Using Antivirals and Vaccines to Forestall a Pandemic. GAO-08-92. Washington, D.C.: December 21, 2007. Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007. Influenza Pandemic: Federal Executive Boards’ Ability to Contribute to Pandemic Preparedness. GAO-07-1259T. Washington, D.C.: September 28, 2007. Influenza Pandemic: Opportunities Exist to Clarify Federal Leadership Roles and Improve Pandemic Planning. GAO-07-1257T. Washington, D.C.: September 26, 2007. Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007. Emergency Management Assistance Compact: Enhancing EMAC’s Collaborative and Administrative Capacity Should Improve National Disaster Response. GAO-07-854. Washington, D.C.: June 29, 2007. 
Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007. Influenza Pandemic: Efforts to Forestall Onset Are Under Way; Identifying Countries at Greatest Risk Entails Challenges. GAO-07-604. Washington, D.C.: June 20, 2007. Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007. The Federal Workforce: Additional Steps Needed to Take Advantage of Federal Executive Boards’ Ability to Contribute to Emergency Operations. GAO-07-515. Washington, D.C.: May 4, 2007. Financial Market Preparedness: Significant Progress Has Been Made in Recent Years, but Pandemic Planning and Other Challenges Remain. GAO-07-399. Washington, D.C.: March 29, 2007. Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to Be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006.
The current H1N1 pandemic highlights the threat posed to our nation by an influenza pandemic. The previous administration's Homeland Security Council (HSC) issued the Implementation Plan for the National Strategy for Pandemic Influenza (Plan) in May 2006 to help address a pandemic. The Government Accountability Office (GAO) was asked to (1) determine how the HSC and responsible federal agencies monitor the progress and completion of the Plan's action items; and (2) assess the extent to which selected action items have been completed. To do this, GAO interviewed officials from the HSC and the six federal agencies responsible for implementing most of the Plan, and analyzed a random sample of 60 action items. While this report does not assess the response efforts for the H1N1 pandemic, GAO continues to monitor the outbreak and the federal response. To oversee agencies' progress in implementing the Plan's action items, the HSC, which is supported by the White House National Security Staff in this administration, convenes regular interagency meetings, asks agencies for summaries of progress, and leads the interagency process that monitors the progress of the Plan. Officials from the six agencies stated that they monitor action items tasked to more than one agency by selecting one or two agencies to report a consolidated summary of progress, approved by each responsible agency, to the HSC. However, neither the HSC nor the agencies monitor or report on the 17 action items intended for nonfederal entities, including, for example, action items asking state, local, and tribal entities to ensure their preparedness plans address mass immunization, even though the information may have been available from other sources, such as the interagency review of state pandemic plans led by the Department of Health and Human Services. 
In addition, the Plan does not describe the types of information needed to carry out the Plan's response-related action items, although agencies may have operational plans or other existing guidance that would provide this information. The HSC reported in October 2008 that the majority of the 324 action items were designated as complete. However, GAO's review of 60 action items found that it was difficult to determine the actual status of some of the 49 designated as complete. All of the action items reviewed have both a description of activities to be carried out and a measure of performance, which the HSC stated that it used to assess completion. However, for more than half of the action items considered complete, the measures of performance do not fully address all of the activities contained in their descriptions. While the HSC's progress summaries sometimes corrected for this by either referring to activities in the action item's description or some other information not reflected in either the measure of performance or description, future progress reports would benefit from using measures of performance that are more consistent with the action items' descriptions. The Plan is predicated on a type of pandemic different in severity and origin than the current H1N1 pandemic, but it is serving as the foundation for the response to the outbreak, supplemented by an additional plan tailored specifically to the characteristics of the H1N1 pandemic. Nevertheless, the National Strategy for Pandemic Influenza and Plan will still be needed for future events as most of the action items in the Plan were to be completed by May 2009. As recommended in earlier GAO work, but not yet implemented, the Plan should be updated to take into account certain missing elements and lessons learned from the H1N1 pandemic; the update should also address the monitoring and assessment improvements GAO identified in this report.
To determine which federal government programs and functions should be designated high risk, we use our guidance document, Determining Performance and Accountability Challenges and High Risks. We consider qualitative factors, such as whether the risk involves public health or safety, service delivery, national security, national defense, economic growth, or privacy or citizens’ rights; or could result in significantly impaired service, program failure, injury or loss of life, or significantly reduced economy, efficiency, or effectiveness. We also consider the exposure to loss in monetary or other quantitative terms. At a minimum, $1 billion must be at risk, in areas such as the value of major assets being impaired; revenue sources not being realized; major agency assets being lost, stolen, damaged, wasted, or underutilized; potential for, or evidence of, improper payments; and presence of contingencies or potential liabilities. Before making a high-risk designation, we also consider corrective measures planned or under way to resolve a material control weakness and the status and effectiveness of these actions. Since 1990, more than one-third of the areas previously designated as high risk have been removed from the High Risk List because sufficient progress was made in addressing the problems identified. Nonetheless, 11 issues have been on the High Risk List since the 1990s, and 6 of these were on our original list of 14 areas in 1990. Our experience with the high-risk series over the past 25 years has shown that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency leaders grounded in the five criteria for removal from the High Risk List, as well as any needed congressional action. The five criteria for removal are:
Leadership Commitment. Demonstrated strong commitment and top leadership support.
Capacity. Agency has the capacity (i.e., people and resources) to resolve the risk(s).
Action Plan. 
A corrective action plan exists that defines the root cause, identifies solutions, and provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended.
Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.
Demonstrated Progress. Ability to demonstrate progress in implementing corrective measures and in resolving the high-risk area.
These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. Figure 1 shows the five criteria for removal as a designated high-risk area and examples of actions taken by agencies in response. In designating VA as a high-risk area, we categorized our concerns about VA’s ability to ensure the timeliness, cost-effectiveness, quality, and safety of veterans’ health care into five broad areas: (1) ambiguous policies and inconsistent processes, (2) inadequate oversight and accountability, (3) information technology challenges, (4) inadequate training for VA staff, and (5) unclear resource needs and allocation priorities. We have made numerous recommendations that aim to address weaknesses in VA’s management and oversight of its health care system. Although VA has taken actions to address some of them, more than 100 recommendations have yet to be fully resolved, including recommendations related to the following five broad areas of concern: Ambiguous policies and inconsistent processes. Ambiguous VA policies lead to inconsistency in the way VA facilities carry out processes at the local level. In numerous reports, we have found that this ambiguity and inconsistency may pose risks for veterans’ access to VA health care, or for the quality and safety of VA health care they receive. 
For example, in December 2012, we found that unclear policies led staff at VA facilities to inaccurately record the required dates for appointments and to inconsistently track new patients waiting for outpatient medical appointments at VA facilities. These practices may have delayed the scheduling of veterans’ outpatient medical appointments and may have increased veterans’ wait times for accessing care at VA facilities. In some cases, we found that staff members were manipulating medical appointment dates to conform to VA’s timeliness guidelines, which likely contributed further to the inaccuracy of VA’s wait-times data for outpatient medical appointments. Without accurate data, VA lacks assurance that veterans are receiving timely access to needed health care. In our November 2014 report, we found that VA policies lacked clear direction for how staff at VA facilities should document information about veteran suicides as part of VA’s behavioral health autopsy program (BHAP). The BHAP is a national initiative to collect demographic, clinical, and other information about veterans who have died by suicide and use it to improve the department’s suicide prevention efforts. In a review of a sample of BHAP records from five VA facilities, we found that more than half of the records had incomplete or inaccurate information. The lack of reliable data limited the department’s opportunities to learn from past veteran suicides and ultimately diminished VA’s efforts to improve its suicide prevention activities. We have also identified gaps in VA policies related to facilities’ response to adverse events—clinical incidents that may pose the risk of injury to a patient as the result of a medical intervention or the lack of an appropriate intervention, such as a missed or delayed diagnosis, rather than due to the patient’s underlying medical condition. 
Specifically, we found that VA policies were unclear as to how focused professional practice evaluations (FPPE) should be documented, particularly what information should be included. An FPPE is a time-limited evaluation during which a VA facility assesses a provider’s professional competence when a question arises regarding the provider’s ability to provide safe, quality patient care. In our December 2013 report, we found that gaps in VA’s FPPE policy may have hindered VA facilities’ ability to appropriately document the evaluation of a provider’s skills, support any actions initiated, and track provider-specific incidents over time. Inadequate oversight and accountability. We also have found weaknesses in VA’s ability to hold its health care facilities accountable and ensure that identified problems are resolved in a timely and appropriate manner. Specifically, we have found that (1) certain aspects of VA facilities’ implementation of VA policies are not routinely assessed by the department; (2) VA’s oversight activities are not always sufficiently focused on its facilities’ compliance with applicable requirements; and (3) VA’s oversight efforts are often impeded by its reliance on facilities’ self-reported data, which lack independent validation and are often inaccurate or incomplete. In a July 2013 report, for example, we found that VA needed to take action to improve the administration of its provider performance pay and award systems. In that report, we found that VA had not reviewed performance goals set by its facilities for providers and, as a result, concluded that VA did not have reasonable assurance that the goals created a clear link between performance pay and providers’ performance in caring for veterans. At four VA facilities included in our review, performance pay goals covered a range of areas, such as clinical competence, research, teaching, patient satisfaction, and administration. 
Providers who were eligible for performance pay received it at all four of the facilities we reviewed, despite at least one provider in each facility having personnel actions taken against them related to clinical performance in the same year. Such personnel actions resulted from issues including failing to read mammograms and other complex images competently, practicing without a current license, and leaving residents unsupervised during surgery. In March 2014, we found that VA lacked sufficient oversight mechanisms to ensure that its facilities were complying with applicable requirements and not inappropriately denying claims for non-VA care. Specifically, the March 2014 report cited noncompliance with applicable requirements for processing non-VA emergency care claims for a sample we reviewed. The noncompliance at four VA facilities led to the inappropriate denial of about 20 percent of the claims we reviewed and the failure to notify almost 65 percent of veterans whose claims we reviewed that their claims had been denied. We found VA’s field assistance visits, one of the department’s primary methods for monitoring facilities’ compliance with applicable requirements, to be lacking. In these annual on-site reviews at a sample of VA facilities, VA officials were to examine the financial, clinical, administrative, and organizational functions of staff responsible for processing claims for non-VA care; however, we found that these visits did not examine all practices that could lead VA facilities to inappropriately deny claims. Further, although VA itself recommended that managers at its facilities audit samples of processed claims to determine whether staff processed claims appropriately, the department did not require VA facilities to conduct such audits, and none of the four VA facilities we visited were doing so. 
In a September 2014 report and in three previous testimonies for congressional hearings, we identified weaknesses in VA’s oversight of veterans’ access to outpatient specialty care appointments in its facilities. VA officials told us they use data reported by VA facilities to monitor how the facilities are performing in meeting VA’s guideline of completing specialty care consults—requests from VA providers for evaluation or management of a patient for a specific clinical concern, or for a specialty procedure, such as a colonoscopy—within 90 days. We found cases where staff had incorrectly closed a consult even though care had not been provided, and found that VA does not routinely audit consults to assess whether its facilities are appropriately managing them and accurately documenting actions taken to resolve them. Instead, we found that VA relied largely on facilities’ self-certification that they were doing so. Information technology challenges. In recent reports, we also have identified limitations in the capacity of VA’s existing information technology (IT) systems. Of particular concern is the outdated, inefficient nature of certain systems, along with a lack of system interoperability—the ability to exchange information—which presents risks to the timeliness, quality, and safety of VA health care. For example, we have reported on VA’s failed attempts to modernize its outpatient appointment scheduling system, which is about 30 years old. Among the problems cited by VA staff responsible for scheduling appointments are that the system requires them to use commands requiring many keystrokes and that it does not allow them to view multiple screens at once. Schedulers must open and close multiple screens to check a provider’s or a clinic’s full availability when scheduling a medical appointment, which is time-consuming and can lead to errors. 
VA undertook an initiative to replace its scheduling system in 2000 but terminated the project after spending $127 million over 9 years, due to weaknesses in project management and a lack of effective oversight. The department has since renewed its efforts to replace its appointment scheduling system, including launching a contest for commercial software developers to propose solutions, but VA has not yet purchased or implemented a new system. In 2014, we found that interoperability challenges and the inability to electronically share data across facilities led VA to suspend the development of a system that would have allowed it to electronically store and retrieve information about surgical implants (including tissue products) and the veterans who receive them nationwide. Having this capability would be particularly important in the event that a manufacturer or the Food and Drug Administration (FDA) recalled a medical device or tissue product because of safety concerns. In the absence of a centralized system, at the time of our report VA clinicians tracked information about implanted items using stand-alone systems or spreadsheets that were not shared across VA facilities, which made it difficult for VA to quickly determine which patients may have received an implant that was subject to a safety recall. Further, as we have reported for more than a decade, VA and the Department of Defense (DOD) lack electronic health record systems that permit the efficient electronic exchange of patient health information as military servicemembers transition from DOD to VA health care systems. The two departments have engaged in a series of initiatives intended to achieve electronic health record interoperability, but accomplishment of this goal has been continuously delayed and has yet to be realized. 
The ongoing lack of electronic health record interoperability limits VA clinicians’ ability to readily access information from DOD records, potentially impeding their ability to make the most informed decisions on treatment options, and possibly putting veterans’ health at risk. One location where the delays in integrating VA’s and DOD’s electronic health records systems have been particularly burdensome for clinicians is at the Captain James A. Lovell Federal Health Care Center (FHCC) in North Chicago, the first planned fully integrated federal health care center for use by both VA and DOD beneficiaries. We found in June 2012 that due to interoperability issues, the FHCC was employing five dedicated, full-time pharmacists and one pharmacy technician to conduct manual checks of patients’ VA and DOD health records to reconcile allergy information and identify possible interactions between drugs prescribed in VA and DOD systems. Inadequate training for VA staff. In a number of reports, we have identified gaps in VA training that could put the quality and safety of veterans’ health at risk. In other cases, we have found that VA’s training requirements can be burdensome to complete, particularly for VA staff who are involved in direct patient care. In a November 2014 report that examined VA’s monitoring of veterans with major depressive disorder (MDD) and whether those who are prescribed an antidepressant receive recommended care, we determined that VA data may underestimate the prevalence of MDD among veterans and that a lack of training for VA clinicians on diagnostic coding may contribute to the problem. 
In a review of medical record documentation for a sample of veterans, we found that VA clinicians had not always appropriately coded encounters with veterans they diagnosed as having MDD, instead using a less specific diagnostic code for “depression not otherwise specified.” VA’s data on the number of veterans with MDD are based on the diagnostic codes associated with patient encounters; therefore, coding accuracy is critical to assessing VA’s performance in ensuring that veterans with MDD receive recommended treatments, as well as measuring health outcomes for these veterans. In a May 2011 report, we found that training for staff responsible for cleaning and reprocessing reusable medical equipment (RME), such as endoscopes and some surgical instruments, was lacking. Specifically, VA had not specified the types of RME for which training was required; in addition, VA provided conflicting guidance to facilities on how to develop this training. Without appropriate training on reprocessing, we found that VA staff may not be reprocessing RME correctly, posing patient safety risks. In our October 2014 report on VA’s implementation of a new, nationally standardized nurse staffing methodology, staff from selected VA facilities responsible for developing nurse staffing plans told us that VA’s individual, computer-based training on the methodology was time-consuming to complete and difficult to understand. These staff members said they had difficulty finding the time to complete it while also carrying out their patient care responsibilities. Many suggested that their understanding of the material would have been greatly improved with an instructor-led, group training course where they would have an opportunity to ask questions. Unclear resource needs and allocation priorities. 
In many of our reports, we have found gaps in the availability of data required by VA to efficiently identify resource needs and to ensure that resources are effectively allocated across the VA health care system. For example, in October 2014, we found that VA facilities lacked adequate data for developing and executing nurse staffing plans at their facilities. Staffing plans are intended to help VA facilities identify appropriate nurse staffing levels and skill mixes needed to support high-quality patient care in the different care settings throughout each VA facility, and are used to determine whether their existing nurse workforce sufficiently meets the clinical needs of each unit, or whether facilities need to hire additional staff. At selected VA facilities, staff members responsible for developing and executing the nurse staffing plans told us that they needed to use multiple sources to collect and compile the data—in some cases manually. They described the process as time-consuming, potentially error-prone, and requiring data expertise they did not always have. In a May 2013 report, we found that VA lacked critical data needed to compare the cost-effectiveness of non-VA medical care to that of care delivered at VA facilities. Specifically, VA lacks a data system to group medical care delivered by non-VA providers by episode of care—all care provided to a veteran during a single office visit or inpatient stay. As a result, VA cannot efficiently assess whether utilizing non-VA providers is more cost-effective than augmenting its own capacity in areas with high non-VA health care utilization. In a September 2014 report, we identified concerns with VA’s management of its pilot dialysis program, which had been implemented in four VA-operated clinics. 
Specifically, we found that, five years into the pilot, VA had not set a timetable for the completion of its dialysis pilot or documented how it would determine whether the pilot was successful, including improving the quality of care and achieving cost savings. We also found that VA data on the quality of care and treatment costs were limited due to the delayed opening of two of the four pilot locations. Veterans who receive dialysis are one of VA’s most costly populations to serve, but VA has limited capacity to deliver dialysis in its own facilities, and instead refers most veterans to non-VA providers for this treatment. VA began developing its dialysis pilot program in 2009 to address the increasing number of veterans needing dialysis and the rising costs of providing this care through non-VA providers. VA has taken actions to address some of the recommendations we have made related to VA health care; however, there are currently more than 100 that have yet to be fully resolved, including recommendations related to the five broad areas of concern highlighted above. For example, to ensure that its facilities are carrying out processes at the local level more consistently—such as scheduling veterans’ medical appointments—VA needs to clarify its existing policies. VA also needs to strengthen oversight and accountability across its facilities by conducting more systematic, independent assessments of processes carried out at the local level, including how VA facilities are resolving specialty care consults and processing claims for non-VA care. We also have recommended that VA work with DOD to address the administrative burdens created by the lack of interoperability between their two IT systems. 
A number of our recommendations aim to improve training for staff at VA facilities, to address issues such as how staff are cleaning, disinfecting, and sterilizing reusable medical equipment, and to more clearly align training on VA’s new nurse staffing methodology with the needs of staff responsible for developing nurse staffing plans. Finally, we have recommended that VA improve its methods for identifying VA facilities’ resource needs and for analyzing the cost-effectiveness of VA health care. The recently enacted Veterans Access, Choice, and Accountability Act included a number of provisions intended to help VA address systemic weaknesses. For example, the law requires VA to contract with an independent entity to (1) assess its capacity to meet the needs of veterans who use the VA health care system, given their current and projected demographics, (2) examine VA’s clinical staffing levels and productivity, and (3) review VA’s IT strategies and business processes, among other things. The new law also establishes a 15-member commission, to be appointed primarily by bipartisan congressional leadership, which will examine how best to organize the VA health care system, locate health care resources, and deliver health care to veterans. It is critical for VA leaders to act on the findings of this independent contractor and congressional commission, as well as on those of VA’s Office of the Inspector General, GAO, and others, and to fully commit themselves to developing long-term solutions that mitigate risks to the timeliness, cost-effectiveness, quality, and safety of the VA health care system. It is also critical that Congress maintain its focus on oversight of VA health care. In the spring and summer of 2014, congressional committees held more than 20 hearings to address identified weaknesses in the VA health care system. 
Sustained congressional attention to these issues will help ensure that VA continues to make progress in improving the delivery of health care services to veterans. We plan to continue monitoring VA’s efforts to improve the timeliness, cost-effectiveness, quality, and safety of veterans’ health care. To this end, we have ongoing work focusing on topics such as veterans’ access to primary care and mental health services; primary care productivity; nurse recruitment and retention; monitoring and oversight of VA spending on training programs for health care professionals; mechanisms VA uses to monitor quality of care; and VA and DOD investments in Centers of Excellence—which are intended to produce better health outcomes for veterans and service members. An assessment of the status of VA health care’s high-risk designation will be done during our next update in 2017. Chairman Isakson, Ranking Member Blumenthal and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. For further information about this statement, please contact Debra A. Draper at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Key contributors to this statement were Jennie Apter, Jacquelyn Hamilton, and Alexis C. MacDonald. VA Health Care: Improvements Needed in Monitoring Antidepressant Use for Major Depressive Disorder and in Increasing Accuracy of Suicide Data. GAO-15-55. Washington, D.C.: November 12, 2014. VA Health Care: Actions Needed to Ensure Adequate and Qualified Nurse Staffing. GAO-15-61. Washington, D.C.: October 16, 2014. VA Health Care: Management and Oversight of Consult Process Need Improvement to Help Ensure Veterans Receive Timely Outpatient Specialty Care. GAO-14-808. Washington, D.C.: September 30, 2014. 
VA Dialysis Pilot: Documentation of Plans for Concluding the Pilot Needed to Improve Transparency and Accountability. GAO-14-646. Washington, D.C.: September 2, 2014.

Veterans’ Health Care: Oversight of Tissue Product Safety. GAO-14-463T. Washington, D.C.: April 2, 2014.

VA Health Care: Actions Needed to Improve Administration and Oversight of Veterans’ Millennium Act Emergency Care Benefit. GAO-14-175. Washington, D.C.: March 6, 2014.

Electronic Health Records: VA and DOD Need to Support Cost and Schedule Claims, Develop Interoperability Plans, and Improve Collaboration. GAO-14-302. Washington, D.C.: February 27, 2014.

VA Surgical Implants: Purchase Requirements Were Not Always Followed at Selected Medical Centers and Oversight Needs Improvement. GAO-14-146. Washington, D.C.: January 13, 2014.

VA Health Care: Improvements Needed in Processes Used to Address Providers’ Actions That Contribute to Adverse Events. GAO-14-55. Washington, D.C.: December 3, 2013.

VA Health Care: Actions Needed to Improve Administration of the Provider Performance Pay and Award Systems. GAO-13-536. Washington, D.C.: July 24, 2013.

VA Health Care: Management and Oversight of Fee Basis Care Need Improvement. GAO-13-441. Washington, D.C.: May 31, 2013.

VA Health Care: Reliability of Reported Outpatient Medical Appointment Wait Times and Scheduling Oversight Need Improvement. GAO-13-130. Washington, D.C.: December 21, 2012.

VA/DOD Federal Health Care Center: Costly Information Technology Delays Continue and Evaluation Plan Lacking. GAO-12-669. Washington, D.C.: June 26, 2012.

VA Health Care: Weaknesses in Policies and Oversight Governing Medical Supplies and Equipment Pose Risks to Veterans’ Safety. GAO-11-391. Washington, D.C.: May 3, 2011.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
VA operates one of the largest health care delivery systems in the nation, including 150 medical centers and more than 800 community-based outpatient clinics. Enrollment in the VA health care system has grown significantly, increasing from 6.8 million to 8.9 million veterans between fiscal years 2002 and 2013. Over this same period, Congress has provided steady increases in VA's health care budget, which grew from $23.0 billion to $55.5 billion. Risks to the timeliness, cost-effectiveness, quality, and safety of veterans' health care, along with other persistent weaknesses GAO and others have identified in recent years, raised serious concerns about VA's management and oversight of its health care system. Based on these concerns, GAO designated VA health care a high-risk area and added it to GAO's High Risk List in 2015. Since 1990, GAO has regularly updated the list of government operations that it has identified as high risk due to their vulnerability to fraud, waste, abuse, and mismanagement or the need for transformation to address economy, efficiency, or effectiveness challenges. This statement addresses (1) the criteria for adding areas to and removing them from the High Risk List; (2) the specific areas of concern identified in VA health care that led to its high-risk designation; and (3) the actions needed to address the VA health care high-risk area. To determine which federal government programs and functions should be designated high risk, GAO considers a number of factors. For example, it assesses whether the risk involves public health or safety, service delivery, national security, national defense, economic growth, or privacy or citizens' rights, or whether the risk could result in significantly impaired service, program failure, injury or loss of life, or significantly reduced economy, efficiency, or effectiveness.
There are five criteria for removal from the High Risk List: leadership commitment, capacity (the people and resources needed to resolve the risk), development of an action plan, monitoring, and demonstrated progress in resolving the risk. In designating the health care system of the Department of Veterans Affairs (VA) as a high-risk area, GAO categorized its concerns about VA's ability to ensure the timeliness, cost-effectiveness, quality, and safety of veterans' health care into five broad areas:

1. Ambiguous policies and inconsistent processes. GAO found that ambiguous VA policies lead to inconsistency in the way its facilities carry out processes at the local level, which may pose risks for veterans' access to VA health care, or for the quality and safety of VA health care.

2. Inadequate oversight and accountability. GAO found weaknesses in VA's ability to hold its health care facilities accountable and ensure that identified problems are resolved in a timely and appropriate manner.

3. Information technology challenges. Of particular concern is the outdated, inefficient nature of certain systems, along with a lack of system interoperability.

4. Inadequate training for VA staff. GAO has identified gaps in VA training that could put the quality and safety of veterans' health at risk, as well as training requirements that were particularly burdensome to complete.

5. Unclear resource needs and allocation priorities. GAO has found gaps in the availability of data required by VA to efficiently identify resource needs and to ensure that resources are effectively allocated across the VA health care system.

VA has taken actions to address some of the recommendations GAO has made related to VA health care, including those related to the five broad areas of concern highlighted above; however, more than 100 recommendations have yet to be fully resolved.
For example, to ensure that processes such as scheduling veterans' medical appointments are carried out more consistently at the local level, VA needs to clarify its existing policies, as well as strengthen its oversight and accountability across its facilities. The Veterans Access, Choice, and Accountability Act of 2014 included a number of provisions intended to help VA address systemic weaknesses in its health care system. Effective implementation, coupled with sustained congressional attention to these issues, will help ensure that VA continues to make progress in improving the delivery of health care services to veterans. GAO plans to continue monitoring VA's efforts to improve veterans' health care. GAO will assess the status of VA health care's high-risk designation in its next update in 2017.
Of the 11 agencies that currently have SBIR programs, two—NIH and DoD—account for the largest share of awards. Large agencies such as NIH and DoD generally rely on the various components within the agency, such as the 23 participating institutes and centers within NIH and the 10 participating military and defense components within DoD, to help implement the SBIR program and make funding decisions. Agencies decide what type of research to fund, solicit and review applications for technical and scientific merit, verify that the applicant meets eligibility criteria, select which projects to fund, and decide the size of the award. Awards can be made to successful applicants in the form of grants, contracts, or cooperative agreements. SBA plays a key administrative and oversight role, such as issuing policy directives and annual reports for the program and monitoring agencies’ annual funding allocations. Once an award has been issued, awarding agency staff monitor the progress of work on the project. The Small Business Innovation Development Act of 1982 established a three-phased structure for the SBIR program. During phase I, participating agencies fund a proposed idea that appears to have commercial potential to more fully investigate its scientific and technical merit and feasibility. Work on the phase I project is generally not to exceed 6 months. During phase II, participating agencies fund projects to further develop the idea, generally over a 2-year period, again taking into account its commercial potential. During phase III, firms are expected to commercialize the resulting product or process using other federal or private sector funds, but with no further SBIR funding. Unlike phases I and II, phase III has no general limits in time or dollar amounts. In addition to phase I and II awards, NIH and DoD also make awards through a streamlined process—known as fast track—for projects with high commercial potential.
Both agencies use an expedited review process for fast track applications. However, each agency operates its program differently. For example, to qualify for DoD’s fast track awards, firms that have received phase I awards must obtain commitments for outside funding that DoD will match, while NIH’s program considers information on phases I and II simultaneously and does not involve matching funds. In addition, the two agencies account for their fast track awards differently. NIH maintains data on fast track awards separately from its data on phase I and II awards. At DoD, fast track awards are included in the phase I and phase II awards data. Funding for fast track awards, which comprise a small portion of each agency’s SBIR awards, is not subject to specific dollar limits. Funds for all awards are disbursed contingent upon the awarded firm successfully achieving planned milestones. Receipt of a phase II award is contingent upon successful completion of a phase I award. Firms may receive phase II awards from the same agency that funded their phase I award or from a different participating agency. Successful commercialization during phase III may immediately follow completion of the phase II project or may not occur for several years; drug development and medical products and processes that require extensive testing and federal regulatory approval before marketing fall into this final category. Funding for commercialization may come from the private sector or from non-SBIR federal sources. Venture capital firms, which typically invest in new or existing firms with the potential for above-average growth, are one source of private sector funding available to small businesses that would like to commercialize their SBIR-supported projects.
Venture capital firms may seek to invest in small businesses that have received SBIR awards because, in exchange for their venture capital, they receive an ownership stake in the business and, ultimately, a share of any potential profits that result when the SBIR-supported project is commercialized. Generally, phase I and phase II awards may not exceed $100,000 and $750,000, respectively. SBA has interpreted its statutory authority as providing it with discretion to allow agencies to make awards above these guidelines, when appropriate, if they provide written justification to SBA after doing so. For example, in 2001, SBA granted NIH a waiver that allowed it to routinely make awards above the guidelines for unusually expensive research, such as medical treatment and drug research. Similarly, DoD makes awards above the guidelines on a case-by-case basis according to certain criteria, such as whether the cost was determined to be reasonable and necessary to ensure a high-quality product. To compete for SBIR awards, firms must meet size, ownership, and other eligibility criteria. For example, eligible firms must (1) be organized as for-profit firms that operate primarily within, or contribute significantly to, the U.S. economy; (2) be 51 percent or more owned by individuals who are U.S. citizens or permanent resident aliens; and (3) have, with their affiliates, no more than 500 employees. Under current law, applicants self-certify their eligibility and face potential criminal and civil penalties for misrepresenting the status of their firm in order to obtain an SBIR award. Although firms may receive multiple phase I and II SBIR awards for different projects—either from the same or from different participating agencies—firms may not receive multiple awards for work that is essentially the same. NIH and DoD follow similar procedures to select applicants and determine their eligibility for an SBIR award. (See fig. 1.)
Specifically, each agency consults its various awarding components to develop research topics that further the agency’s mission and periodically announces SBIR project opportunities through a solicitation to seek funding for research on those topics. Before the date applications are due, both agencies encourage potential applicants to contact agency staff to discuss any questions they may have, including questions about their eligibility. As of December 2005, both agencies require applicants to file electronically and to register with the Central Contractor Registration database. Submitted applications are reviewed internally for administrative completeness, and those that do not comply with the requirements may be rejected or take longer to review. At NIH, appropriately completed applications are assigned to a scientific review group and a potential funding component to undergo a two-step external peer review. The scientific review groups evaluate each application for scientific and technical merit and commercial potential, assign scores, and subject meritorious applications to a second review. The National Advisory Council or the board of the awarding component then evaluates the applications for scientific merit, factoring in the scientific review groups’ scores and relevance to the awarding component’s mission and goals, and recommends applications to be funded. The members of the scientific review groups and the National Advisory Council or the board of the awarding component are nonfederal scientists, physicians, and engineers who are recognized authorities in their field. In contrast, at DoD, after an application has been forwarded to the appropriate awarding component, internal agency staff review the scientific and technical merit and the commercial potential of the application and make decisions about which ones to fund.
Once officials at each awarding component decide which of the meritorious applications to fund, both NIH and DoD largely rely on information that the applicant provides and certifies as accurate to determine eligibility, although each agency makes some effort to independently corroborate the self-certified information. To help conserve limited staff resources and provide a more timely determination, both NIH and DoD verify the eligibility of only those firms that submit applications deemed meritorious by their review process. In recent years, the ownership criteria have come under increased scrutiny, particularly with regard to majority ownership of SBIR awardee firms by venture capital firms. Specifically, in 2001, an SBA administrative law judge issued a decision clarifying that the terms “individuals” and “citizens” in the SBIR criteria meant only natural persons, not entities such as corporations. Subsequently, in the fall of 2002, SBA both revised its SBIR policy directive and provided additional informal clarification to participating agencies regarding the ownership criteria, but did not specifically address the role of venture capital firms or other corporations. Then in 2003, the same SBA administrative law judge issued a decision stating that venture capital firms could not be considered individuals for the purpose of satisfying the ownership criteria for the program. During fiscal years 2001 through 2004, NIH and DoD made a total of 16,019 SBIR awards valued at $5.3 billion. This section discusses the following key characteristics that we identified for these awards: (1) total number and value of the awards made, (2) geographical distribution of the awards made, (3) agency components making the awards, and (4) size of firms receiving the awards. In addition, this section provides detailed information on the key characteristics of those awards that were made to firms that had received venture capital investment.
From fiscal year 2001 through 2004, NIH and DoD issued a combined total of 11,146 phase I awards, totaling about $1.3 billion, and 4,675 phase II awards, totaling about $3.8 billion. As shown in table 1, during fiscal years 2001 through 2004, DoD made twice as many awards as NIH, totaling over $3.2 billion. However, NIH awards were, on average, larger than the DoD awards. We found the following at NIH:

Phase I awards averaged $162,537, with a median of $100,658, and ranged from $61,750 to $1.7 million, with about 90 percent falling between $96,000 and $489,000.

Phase II awards averaged $934,643, with a median of $763,719, and ranged from $150,593 to about $6.5 million, with about 90 percent falling between $542,000 and $1.8 million.

Fast track awards averaged $1.1 million, with a median of $850,000, and ranged from $96,514 to $9.6 million, with about 90 percent falling between $173,000 and $2.5 million.

In contrast, at DoD, we found the following:

Phase I awards averaged $89,504, with a median of $99,000, and ranged from $36,595 to $449,000, with about 90 percent falling between $68,000 and $120,000.

Phase II awards averaged $771,362, with a median of $747,622, and ranged from $69,997 to about $4.4 million, with about 90 percent falling between $450,000 and $1.2 million.

While a firm in every state received at least one SBIR award from both NIH and DoD, a small number of states accounted for most of the awards. Specifically, about 70 percent of all SBIR awards and dollars awarded went to firms in 10 states, although the states differed between NIH and DoD. Moreover, firms in these states also submitted about 70 percent of the phase I and II applications at NIH, and phase I applications at DoD. DoD does not maintain comparable information on phase II applications by state. For example, at NIH and DoD, small businesses from California and Massachusetts submitted about a third of the applications and received about a third of the awards and a third of the dollars awarded.
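Each award population above is summarized with an average, a median, and a range within which about 90 percent of awards fall. As a minimal sketch of that arithmetic only (the `summarize` helper and the award amounts below are illustrative assumptions, not GAO's method or actual SBIR award data):

```python
# Minimal sketch of the summary statistics quoted in the text (average,
# median, and the band covering about 90 percent of awards), computed
# over made-up award amounts rather than real SBIR data.
from statistics import mean, median

def summarize(amounts):
    s = sorted(amounts)
    n = len(s)
    lo = s[int(0.05 * (n - 1))]   # approximate 5th percentile
    hi = s[int(0.95 * (n - 1))]   # approximate 95th percentile
    return {"average": mean(s), "median": median(s),
            "90_percent_band": (lo, hi), "min": s[0], "max": s[-1]}

awards = [80_000, 95_000, 99_000, 100_000, 105_000, 120_000, 250_000]
stats = summarize(awards)
print(stats["median"])            # 100000
print(stats["90_percent_band"])   # (80000, 120000)
```

A few very large awards can pull the mean well above the median, which is one reason the figures above pair the two measures rather than reporting an average alone.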
Tables 2 and 3 provide information on the geographical distribution of NIH’s and DoD’s total SBIR applications and awards for the top 10 states for fiscal years 2001 through 2004; detailed information on the distribution of applications, awards, and awarded dollars among all states is presented in appendix II for NIH and appendix III for DoD. At both NIH and DoD, a small number of awarding components accounted for most of the SBIR awards and dollars awarded. Specifically, 4 of NIH’s 23 participating institutes and centers accounted for about half of the SBIR awards and dollars. The National Cancer Institute had the largest share at NIH, accounting for almost 20 percent of the SBIR awards and dollars. Table 4 shows the total SBIR awards and dollars associated with these four NIH awarding components. For information on the distribution of SBIR awards and dollars among all participating NIH components, see appendix II. Similarly, 4 of DoD’s 10 awarding components accounted for more than 80 percent of the SBIR awards and dollars awarded, with the Air Force and the Army together accounting for 43 percent of awards and just over half of the dollars. Table 5 shows the total SBIR awards and dollars awarded associated with these four DoD awarding components. For more information on the distribution of SBIR awards and awarded dollars among all participating DoD components, see appendix III. Firms that had received venture capital investment received a relatively small percentage of the NIH and DoD SBIR awards, although they received a somewhat larger percentage of awards from NIH than DoD. As shown in table 6, at NIH, firms that had received venture capital investment received about 18 and 14 percent of the phase I and II awards, respectively, accounting for about 18 percent of the agency’s total SBIR dollars. As shown in table 7, at DoD, about 7 percent of phase I and II awards were made to firms that had received venture capital investment.
These awards accounted for about 7 percent of the dollars DoD awarded to SBIR firms. Awards to firms that had received venture capital investment were generally concentrated in the same states and in the same agency components as for all SBIR awards. At NIH, the 10 states that accounted for close to 70 percent of the total SBIR dollars also accounted for about 77 percent of the dollars to firms that had received venture capital investment. Likewise, at NIH, the four awarding components that issued about half of the SBIR dollars also accounted for 73 percent of the dollars to firms that had received venture capital investment. At DoD, the 10 states that accounted for 73 percent of the total dollars awarded to SBIR firms also accounted for about 70 percent of the dollars to firms that had received venture capital investment. In addition, the four awarding components that issued 85 percent of the dollars also issued 82 percent of the dollars to firms that had received venture capital investment. In contrast, awards to firms that had not received venture capital investment were more widely distributed, although small businesses in every state received at least one SBIR award from NIH and DoD. Specifically, we found that in 17 states, none of the firms that had received SBIR awards from NIH had received venture capital investment. For example, awardee firms in Nebraska, Vermont, and Delaware each accounted for less than 1 percent of NIH’s SBIR dollars, and none of the firms in these states had received venture capital investment. For DoD, we found that in 21 states, none of the firms that had received SBIR awards had received venture capital investment. For example, awardee firms in Nevada, Tennessee, and Rhode Island each accounted for about 1 percent or less of the dollars DoD awarded, and none of the firms in these states had received venture capital investment. 
Detailed information on the geographical distribution of applications, awards, and dollars awarded to firms that had received venture capital investment is presented in appendix II for NIH and appendix III for DoD. The firms that received DoD SBIR awards were, on average, relatively small-sized firms. Half of the firms had 20 or fewer employees, as reported by awardees at the time they applied for SBIR awards. DoD SBIR awardee firms that had received venture capital investment were also small-sized, on average, but about 30 percent larger than firms that had not received such investment. We could not determine the size of SBIR awardee firms for NIH, because the agency does not maintain comparable information on the number of employees at awardee firms. While about half of NIH awards and 12 percent of DoD awards were above the guidelines, less than 20 percent of such awards at NIH and about 8 percent at DoD went to firms that had received venture capital investment. However, at NIH, we found that firms that had received venture capital investment were more likely than firms that had not received such investment to receive the largest awards. A similar relationship existed for DoD’s phase I awards but not for its phase II awards. While the data, overall, indicate a relationship between firms that had received venture capital investment and high award amounts, they do not indicate whether the presence of venture capital investment was the reason the firms received an SBIR award. Awards above the guidelines were generally concentrated in the same states and made by the same awarding agency components as were awards that were within the guidelines. From fiscal years 2001 through 2004, 2,674 of the 5,061 SBIR awards made by NIH exceeded the guidelines. These awards totaled about $1.4 billion and accounted for 70 percent of NIH’s SBIR dollars. During the same time period, 1,302 of the 10,958 SBIR awards made by DoD exceeded the guidelines. 
These awards totaled $743 million and accounted for 23 percent of the dollars DoD awarded. Table 8 shows the total number and dollar value of the SBIR awards made by NIH and DoD during fiscal years 2001 through 2004 that were both within and above the guidelines. At NIH, we found the following:

Phase I awards that exceeded the guidelines averaged $223,106, with a median of $153,138, and ranged from just over $100,000 to $1.7 million, but about 90 percent were $595,000 or less.

Phase II awards above the guidelines averaged $1.1 million, with a median of $924,914, and ranged from just over $750,000 to $6.5 million, but about 90 percent were $2.1 million or less.

Fast track awards above the guidelines averaged $1.5 million, with a median of $1,151,979, and ranged from just over $850,000 to $9.6 million, but about 90 percent were $3.4 million or less.

Although NIH does not retain data centrally on the reason individual projects exceeded the SBA guidelines, it generally makes such awards for research in specific fields, such as biotechnology, that is relatively costly to conduct. In 2001, SBA granted NIH a waiver that allows the agency to issue awards that exceed the guidelines without reporting data on individual projects. In the waiver, SBA recognized that NIH routinely issues awards for particularly expensive technologies. During the review process, external peer reviewers also assess the reasonableness of cost estimates before agency officials make awards above the guidelines. At DoD, we found the following:

Phase I awards that exceeded the guidelines averaged $122,312, with a median of $119,911, and ranged from just over $100,000 to $448,796, but about 90 percent were $142,000 or less.

Phase II awards above the guidelines averaged $1.2 million, with a median of $1 million, and ranged from just over $750,000 to $4.4 million, but about 90 percent were $2.0 million or less.
At DoD, the two main reasons awards exceeded the guidelines were that (1) extra funds were needed to ensure a high-quality investigation of a proposed idea and (2) the amount above the guidelines included non-SBIR program funds. These non-SBIR funds, called mission funds, are not included in DoD’s calculation of its 2.5 percent obligation to fund the SBIR program. For example, a DoD component, such as the Army, Navy, or Air Force, may add mission funds to an SBIR project if it decides that the project may help support its programmatic goals. Moreover, it is DoD’s policy to encourage its awarding components to match these and other non-SBIR dollars with SBIR dollars. At DoD, about half of phase II awards above the guidelines received mission funds from awarding components. At both NIH and DoD, most of the awards that exceeded the guidelines went to firms that had not received venture capital investment. As shown in table 9, only 18 percent of awards above the guidelines at NIH and only about 8 percent of awards above the guidelines at DoD went to firms that had received venture capital investment. While firms that had received venture capital investment received a small share of the total SBIR awards, these firms were generally more likely than firms that had not received such investment to receive the largest awards from NIH. Specifically, firms that had received venture capital were twice as likely as firms that had not received such investment to receive phase II awards from NIH greater than $1 million, which accounted for about 37 percent of NIH’s phase II dollars; these firms were about six times more likely to receive phase II awards greater than $2.5 million, which accounted for about 7 percent of the phase II dollars. Similarly, for DoD phase I awards, we found that firms that had received venture capital investment were more likely to receive awards above the guidelines than firms that had not received such investment.
However, this relationship did not hold true for DoD’s phase II awards. We found that phase II awardee firms that had received venture capital were less likely to receive large awards than firms that had not received such investment. While the data, overall, indicate a relationship between firms that received venture capital investment and high award amounts, by themselves the data do not indicate whether the presence of venture capital was the reason these firms received such large awards. Many factors can influence the willingness of venture capital firms to invest funds in small businesses. NIH and DoD awards that exceeded SBA’s guidelines were generally concentrated in the same states and in the same agency components as were all SBIR awards. At NIH, the 10 states that received close to 70 percent of the total SBIR dollars also received 72 percent of the dollars for awards above the guidelines. Likewise, at NIH, the four components that awarded about half of the SBIR program dollars accounted for 54 percent of the awards above the guidelines. At DoD, the 10 states that received 73 percent of the total dollars awarded also received about 74 percent of the awards above the guidelines. In addition, the four DoD agency components that awarded 85 percent of the dollars also awarded about 89 percent of the awards above the guidelines. Detailed information on the geographical and agency component distribution for awards that were above the guidelines, as well as awards that were within the guidelines, and their associated dollars is presented in appendix II for NIH and appendix III for DoD. During fiscal years 2001 through 2004, participation in NIH’s and DoD’s SBIR programs substantially increased for both types of firms—those that had, and those that had not, received venture capital investment.
However, we did observe the following differences in the awards made by NIH and DoD over the 4-year period, including the 2 years before and the 2 years after SBA provided additional clarification of the ownership criteria in October 2002:

Overall participation in the SBIR program by firms that had received, and that had not received, venture capital investment increased. However, firms that had received venture capital investment were added to the program at a significantly higher rate than those that had not. For example, at NIH, over the 4-year period, we noted that participation by firms that had received venture capital investment grew at a rate of 42 percent, compared to 19 percent for those that had not received this type of investment.

The total number of awards made by NIH and DoD to both types of firms in the 2 years after the clarification was greater than the number of awards made before the clarification. However, the rate at which awards were made to firms that had received venture capital investment was significantly greater than the rate at which awards were made to firms that had not received venture capital investment. For example, over the 4-year period, the number of awards that DoD made to firms that had received venture capital investment increased by 167 percent, compared to a 65 percent increase in awards to firms that had not. More specifically, the number of awards to firms that had received venture capital increased from 270 in the 2 years before the clarification to 477 in the 2 years after, whereas awards to firms that had not received venture capital increased from 4,154 in the 2 years before the clarification to 6,057 in the 2 years after.

For firms that had and had not received venture capital investment, the average dollar value of awards generally increased at NIH, but decreased or remained about the same at DoD over this 4-year period.
For example, at NIH, although both types of firms received increasingly larger phase II awards between fiscal years 2001 and 2004, the awards to firms that had received venture capital investment were substantially larger than those to firms that had not received this kind of investment. Specifically, in the 2 years before the clarification, phase II awards to firms that had received venture capital investment averaged about $885,000, compared to over $1.3 million in the 2 years following, while awards to firms that had not received venture capital averaged less than $860,000 before the clarification, compared to less than $950,000 after the clarification.

The number of awards that were above the guidelines increased significantly at NIH after the clarification but decreased at DoD. For example, from 2001 to 2004, the number of awards above the guidelines increased by 45 percent at NIH, from about 600, on average, in the 2 years before the clarification to about 730, on average, in the 2 years after. In contrast, at DoD, these awards decreased by 26 percent, from about 351, on average, in the 2 years before the clarification to about 300, on average, in the 2 years after the clarification.

Firms that had received venture capital investment received an increasing share of total dollars awarded at both NIH and DoD, although the change at DoD was significantly less. For example, at NIH, in the 2 years prior to the clarification, firms that had received venture capital investment were awarded about 14 percent of total SBIR program funds, and in the 2 years following the clarification this share had increased to about 22 percent. At DoD, firms that had received venture capital investment received 6 percent, on average, of the dollars awarded in the 2 years before the clarification and 7 percent, on average, in the 2 years following the clarification.

At both NIH and DoD, 10 states accounted for the majority of dollars awarded both before and after the clarification.
However, the concentration of NIH's dollars awarded to firms that had received venture capital investment increased somewhat in the 10 states following the clarification, while the concentration of DoD's dollars awarded to such firms in the 10 states decreased somewhat.

- Four agency components generally accounted for the majority of dollars awarded, and of dollars awarded to firms that had received venture capital investment, both before and after the clarification. At both NIH and DoD, the concentration of the dollars awarded by the four agency components to firms that had received venture capital investment increased following the clarification, but to a lesser extent at DoD.

- Finally, the number of applications received by both NIH and DoD continued to increase following the SBA clarification. Both agencies experienced significant growth in the number of applications received in the 2 years after the clarification compared to the 2 years preceding it. For example, at DoD, the total number of applications received in fiscal years 2001 and 2002 was 22,139, and the total received in fiscal years 2003 and 2004 was 33,922. According to officials responsible for the SBIR program at NIH and DoD, the quality of the applications for SBIR awards increased or remained the same during the 4-year period. However, because the number of applications received increased at a faster rate than the agencies' SBIR budgets, the percentage of applications funded by NIH and DoD has generally decreased.

Tables 10 through 22 provide detailed data for NIH's and DoD's SBIR awards broken out by fiscal year from 2001 to 2004.

NIH, DoD, and SBA focus primarily on criteria relating to ownership, for-profit status, and the number of employees to determine a firm's eligibility for the SBIR program and take steps to verify eligibility information provided by applicants.
When NIH or DoD officials are unable to ensure the accuracy of an applicant's information, they refer the matter to SBA. After SBA makes an eligibility determination, it makes information about the firms it finds ineligible available on its Web site, but does not always indicate that the determination was made for SBIR purposes. NIH makes information on ineligible firms centrally available to participating agency components, while DoD does not. Each agency limits its data collection efforts largely to information about the SBIR award itself, such as award size and location of the principal investigator, and does not collect information on certain characteristics of the firms receiving the awards, such as the presence of venture capital investment. Officials at NIH, DoD, and SBA told us that they focus largely on three SBIR criteria in their eligibility reviews—ownership, size in terms of the number of employees, and for-profit status of SBIR applicants. However, they also stated that they consider information on the full range of criteria, such as whether the principal investigator is employed primarily by the applying firm and the extent to which work on the project will be performed by others. Both NIH and DoD rely on applicants to self-certify that they meet all of the SBIR program's eligibility requirements as part of their SBIR applications. At NIH, applicants certify that they meet the eligibility criteria by completing a verification statement when NIH notifies them that their application has been selected for funding but prior to NIH making the award. The verification statement requires applicants to respond to a series of questions to certify that they meet SBA's eligibility criteria relating to for-profit status, ownership, number of employees, where the work will be performed, and the primary employment of the principal investigator, among others.
NIH also refers applicants to its notice of SBIR funding opportunities, known as a solicitation, for more detail on SBA's eligibility requirements. NIH will not issue an award until it receives and accepts the applicant's responses to the verification statement. At DoD, the cover sheet for each SBIR application requires applicants to certify they meet SBA's eligibility criteria. The cover sheet also refers applicants to DoD's solicitation that details the requirements. As with NIH, DoD will not fund applications if the questions on the cover sheet are not answered. Both NIH and DoD warn applicants of the civil and criminal penalties for making false, fictitious, or fraudulent statements. In addition to the eligibility criteria provided in the agencies' solicitations, NIH and DoD support periodic conferences during which potential applicants can learn about SBIR eligibility criteria, and both agencies post eligibility requirements on their Web sites. Although agencies rely largely on applicants to self-certify that they meet the SBIR program's eligibility criteria, both NIH and DoD make additional efforts to ensure the accuracy of the information provided by applicants prior to making an award. At NIH, officials in participating institutes and centers conduct Web searches, check press releases, and may request documentation from applicants to verify information on their eligibility status, including, but not limited to, information on ownership, size, principal investigators, board members, location, e-mail addresses, and affiliations with other firms. In addition, NIH officials search published information on such features as venture capital investments and whether the applicant has been purchased by another company or changed its name, as well as internal information on whether NIH has received other grant applications from the applicant.
NIH officials review this information, among other things, to identify potential concerns regarding the applicant's eligibility status. For example, an ".edu" e-mail extension could indicate that the principal investigator is primarily employed by a university rather than by the applicant, as required by SBIR criteria. If no concerns arise, NIH deems the company eligible. However, when concerns cannot be resolved, NIH officials contact the applicant to ask a standard list of questions. Currently, the list of questions varies by institute and center, but according to NIH officials, the agency is in the process of creating a uniform list of eligibility questions to be used throughout NIH. NIH officials also told us that the agency has incorporated eligibility training into the curriculum for its SBIR staff. At DoD, prior to making an award, officials check the information applicants provided in the application cover sheet for consistency with the eligibility information applicants entered into the Central Contractor Registration database, which is required for all applications, and the Online Representations and Certifications Application system, which is required for phase II applications. According to DoD officials, most discrepancies between information on the cover sheets and in the databases occur because the applicant was purchased by another company, the e-mail addresses have changed, or the principal investigator appears to be a full-time employee of an educational institution. To resolve discrepancies, DoD officials may choose to contact the applicant or search the Web for information. Typically, officials told us, DoD awarding components work with firms to answer their eligibility questions and help ensure that they prepare the necessary documentation properly. In addition, DoD encourages self-policing by applicants by posting the names of recent SBIR awardees on its Web site so that competitors can raise any eligibility concerns they may have.
However, Office of Naval Research officials said competing firms seldom raise concerns (usually only one per year) and that typically the concern relates to whether the awardee firm has been purchased by another company. When officials at either NIH or DoD have unresolved concerns about the accuracy of an applicant's eligibility information, they refer the matter to SBA to make an eligibility determination. Upon receipt of a letter from the agency detailing its concerns, SBA officials contact the applicant, ask the firm to recertify its eligibility status, and may request additional documentation on the criteria of concern. For example, SBA officials may request articles of incorporation and information on the distribution of ownership to determine whether the applicant was at least 51 percent owned by qualified individuals. Upon making a determination of eligibility, SBA then notifies the official at the inquiring agency, and the applicant, of its decision. Further, SBA makes the information about firms it finds ineligible publicly available on its Web site so that all participating agencies and the public can access the information. However, SBA does not currently require its eligibility officials to include information on the Web site identifying whether or not the determination was made for SBIR purposes. An SBA official told us the agency plans to include such information on its Web site more systematically before the end of fiscal year 2006. NIH and DoD take different approaches to retaining and sharing information on firms found ineligible by SBA. At NIH, when SBA notifies the referring agency official that it has deemed an SBIR applicant ineligible, the official notes the determination in the applicant's file.
However, according to NIH officials, in response to the May 2003 SBA decision, NIH also began centrally tracking firms that SBA found ineligible, as well as firms that identified themselves as ineligible and withdrew their applications at some point in the review process. The agency makes this information available to all of its institutes and centers that make SBIR awards. In addition, agency officials told us that if, during the eligibility review process, a firm is determined to be ineligible for an SBIR award, NIH advises the applicant to be recertified by SBA before applying for additional awards. In contrast, DoD retains information on firms determined to be ineligible in the applicants' files and does not have a centralized process to share the information across DoD awarding components. However, DoD officials said it is common practice for awarding components to share such information electronically. For the most part, NIH and DoD limit their data collection efforts to information about the SBIR awards they make. Key information the agencies track includes the phase, date, and amount of the award; the geographic location of the awardee firm and principal investigator; and contact information. Currently, the agencies do not maintain detailed data on (1) applicants that the agencies decided not to select for funding, (2) the reasons applicants were not selected for funding, and (3) characteristics of the firms receiving the awards, such as the presence of venture capital investment or the extent of ownership by venture capital firms or other entities. Moreover, neither NIH nor DoD systematically categorizes SBIR projects by the industry represented or by the specific type of research, such as whether the research is for a process or a product or whether the research is for software or therapeutic devices. However, both agencies categorize projects by the general research topics listed in each agency's solicitation.
We provided NIH, DoD, and SBA with a draft of this report for their review and comment. NIH provided only technical comments, which we have incorporated as appropriate. DoD agreed that because data on ownership are not publicly available, it is not possible to determine the extent to which venture capital firms own SBIR awardee firms. Moreover, DoD did not find the results of our analysis surprising in light of differences in markets for SBIR projects supported by NIH and DoD. SBA noted in its comments that while the information in the report may be useful, it could be misconstrued as suggesting a link between the presence of venture capital investment and SBIR ownership criteria when no such link exists. While we understand SBA's concern, we believe that our report clearly states that we used venture capital investment as a proxy for venture capital ownership because ownership data are proprietary and confidential or not readily available. We also explicitly note in the report that no causal link can be inferred from the data. Both DoD and SBA also provided technical comments that we incorporated, as appropriate. The comments from DoD and SBA are included in this report as appendixes IV and V, respectively. We are sending copies of this report to the Director of the National Institutes of Health, the Secretary of Defense, the Administrator of the Small Business Administration, and other interested parties. We will make copies available to others upon request. In addition, this report will be available, at no charge, on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.
In conducting our work, we interviewed officials at the National Institutes of Health (NIH), the Department of Defense (DoD), and the Small Business Administration (SBA) about the procedures they had in place and any changes that occurred during the time period of our review—fiscal years 2001 through 2004. We reviewed agency documentation on awards, award selection and funding, eligibility determinations, and the data elements that are collected during the eligibility process. We also interviewed officials from organizations that represent venture capital investors and biotechnology firms to obtain their views on the Small Business Innovation Research (SBIR) program and venture capital investment. To determine the total number and key characteristics of SBIR awards, we obtained data from NIH and DoD for all awards made during fiscal years 2001 through 2004. Specifically, we asked for funding data on all SBIR awards that originated in fiscal years 2001 through 2004 and excluded data on funding for awards that originated outside that time frame. These data included award amounts projected at the time the award was made and supplemental amounts issued subsequent to the award. We compared these data to data on funds actually issued to awardee firms to ensure that the projections were reasonable. We based our analysis on the projected amounts combined with supplemental funds, which allowed us to provide information on awards that originated in fiscal years 2003 and 2004 but that may not be completed until 2005 or later. However, for NIH fast track awards, we based our analysis on the actual dollars NIH spent on the award as of September 30, 2005. We did not analyze awards NIH made as contracts, which were a small portion of NIH's awards, because certain data elements essential for our analysis were not available. We interviewed key officials at NIH and DoD about their databases.
We assessed the reliability of relevant fields in the agencies’ databases and found them sufficient for our review. Our assessment included tests of the data itself as well as reviews of internal quality control procedures. We used analytic software to identify the following key characteristics of awards made by NIH and DoD: the number of firms receiving these awards; geographic location of these firms; agency components making the awards; the amounts of the awards; and for DoD, the number of employees working for the SBIR firms and their affiliates. We also obtained data from two private sector firms, the Innovation Development Institute (IDI) and Dow Jones-Venture One. Initially, we selected IDI because its database included information that supplemented agency data on SBIR awardee firms. IDI’s database included information on venture capital investments in SBIR awardee firms that was compiled from published information such as company press releases. Subsequently, we obtained data from Dow Jones-Venture One to help corroborate, and supplement as necessary, the IDI data. We selected Dow Jones-Venture One because its database on venture capital investment is compiled and updated with information from both venture capital investors and the firms that receive the investment. We interviewed key officials at IDI and Dow Jones-Venture One about their databases. We assessed the reliability of relevant fields in the databases and found them sufficient for our review. Our assessment included tests of the data itself as well as reviews of internal quality control procedures. We combined data from NIH and DoD with data from the private sector sources to identify firms that had received venture capital investment at any time before they received an SBIR award in our time frame. In a small number of cases, the venture capital investment occurred as early as 1980, although most occurred in or after 1990. 
Specifically, at both agencies combined, 93 percent of SBIR firms first received venture capital investment between 1990 and 2004, 6 percent first received investment between 1985 and 1989, and 1 percent first received investment earlier than 1985. The private sector data did not include information on whether the venture capital investment was still present in the firm at the time of the SBIR award, or whether the venture capital firms were majority owners. We used analytic software to determine the number of awards made, the amounts of the awards, the percentage of applications receiving awards, and how the awards were distributed geographically. Similarly, to identify the number of phase I and phase II awards that were above the guidelines and their key characteristics, we used analytic software on the agency data and the combined data to identify firms that had received venture capital investment by the time they received an SBIR award above the guidelines in our time frame. In our analyses, we express differences in the likelihoods of receiving awards that exceeded SBA’s guidelines by using odds ratios. An odds ratio is generally defined as the ratio of odds of an event occurring in one group compared to the odds of it occurring in another group. In our analyses, the event of interest was receiving an award that exceeded SBA’s guidelines versus one that did not. We computed odds ratios to indicate the difference between firms that had and firms that had not received venture capital investment in the likelihood of receiving awards that (1) exceeded SBA’s guidelines at all, and (2) exceeded the guidelines by large amounts. For example, the odds that a firm that had received venture capital investment received an NIH phase II award above $1 million were .499 while the odds that a firm that had not received venture capital investment received such an award were .244. 
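Those two odds combine into an odds ratio by simple division; as an illustrative sketch using the NIH phase II figures above:

```python
# Odds of receiving an NIH phase II award above $1 million, as reported
# for firms with and without venture capital (VC) investment.
odds_vc = 0.499     # firms that had received VC investment
odds_no_vc = 0.244  # firms that had not

# The odds ratio is simply the ratio of the two odds.
odds_ratio = odds_vc / odds_no_vc
print(round(odds_ratio, 3))  # -> 2.045
```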
Therefore, the ratio of those two odds was 2.045 (.499 / .244), indicating that the odds of receiving a phase II award over $1 million were about twice as high for firms that had received venture capital investment as for firms that had not received such investment. To assess any differences in awards both above and below the guidelines following SBA's October 2002 additional clarification of the SBIR ownership criteria and its May 2003 decision that addressed ownership of SBIR firms by venture capital firms, we compared NIH and DoD data on awards made from October 1, 2000, through September 30, 2002, with data on awards made from October 1, 2002, through September 30, 2004. We used the combined data set to compare NIH and DoD awards from the two time periods in terms of the key characteristics described above. We conducted our work in accordance with generally accepted government auditing standards. In addition to the individual named above, Cheryl Williams (Assistant Director), Stephen Cleary, Curtis Groves, Annamarie Warman Lopata, Gregory Marchand, Marcus Oliver, Alison O'Neill, G. Gregory Peterson, and Anne Rhodes-Kline made key contributions to this report.
The Small Business Innovation Research (SBIR) program is a three-phase program that increases the use of small businesses to meet federal research needs and encourages commercialization of this research. Venture capital is one source of funding to help commercialize SBIR projects. To receive an award, firms must meet ownership and other criteria, and awards may exceed dollar guidelines. In 2002, the Small Business Administration (SBA) clarified that majority owners of firms that receive awards must be individuals rather than corporations. Since 2002, controversy has arisen over the extent to which venture capital firms may own SBIR firms. GAO was asked to provide information on SBIR for fiscal years 2001 through 2004. For NIH and DOD, we determined the (1) number and characteristics of awards, (2) number and characteristics of awards above the guidelines, (3) changes in award characteristics after 2002, and (4) factors agencies consider, and data they collect on, SBIR awards. NIH, DOD, and SBA provided technical comments that were incorporated, as appropriate. DOD said our findings were not surprising in light of differences in the markets for SBIR projects. SBA said our findings, though useful, may be misconstrued as suggesting a link between venture capital investment and SBIR eligibility, when no such link exists. During fiscal years 2001-2004, the National Institutes of Health (NIH) and Department of Defense (DOD) made 16,019 SBIR awards valued at $5.3 billion. GAO identified the following characteristics of these awards: (1) most were concentrated in a few states; (2) a few agency components made most of the awards; (3) award amounts ranged from well below the guidelines to significantly above them; (4) few awards were made to firms that had received venture capital investment, although NIH made more such awards than DOD; and (5) firms that received DOD SBIR awards were relatively small.
Overall, from fiscal year 2001 through 2004, about half of NIH awards and 12 percent of DOD awards exceeded the guidelines, and most went to firms that had not received venture capital investment. Awards above the guidelines accounted for 70 percent of NIH's SBIR dollars and 23 percent of DOD's. Agency officials said NIH and DOD made such awards generally to fund relatively expensive research or to ensure high-quality results. Awards above the guidelines to firms that had received venture capital investment accounted for 18 percent of NIH's awards above the guidelines, and about 8 percent of DOD's. At NIH, firms that had received venture capital investment were more likely to receive the largest awards than firms that had not. A similar relationship existed for DOD's phase I awards but not for its phase II awards. Since 2002, when SBA clarified SBIR ownership eligibility criteria, an increasing number of awards have been made to small business firms that had received venture capital investment; such firms have generally received larger awards at NIH and, in the aggregate, a larger share of NIH's and DOD's available SBIR funds. In addition, the average phase II award amount to firms that had received venture capital investment increased by over 70 percent, from about $860,000 in fiscal year 2001 to about $1.5 million in fiscal year 2004. As a result, such firms attracted a greater percentage of NIH's total SBIR dollars each year--about 21 percent on average in fiscal years 2003 and 2004, compared to an average of about 14 percent in fiscal years 2001 and 2002. At DOD we found similar trends, but to a somewhat lesser extent. NIH, DOD, and SBA focus mainly on SBIR eligibility criteria relating to ownership, for-profit status, and the number of employees when reviewing applications. Although applicants self-certify that they meet these criteria, both NIH and DOD make efforts to verify the accuracy of the information prior to making an award. 
When agency officials are unable to verify the accuracy of an applicant's information, they refer the matter to SBA. Both agencies limit their data collection efforts largely to information about the SBIR award itself, such as award size. Agencies are not required to gather data on characteristics of the firms receiving the awards, such as the presence of venture capital investment; as a result, this information is currently not being collected.
The President’s Budget for Fiscal Year 2006 included 1,087 IT projects, totaling about $65 billion. The planned expenditures at the civilian agencies comprised about $35 billion of that total cost. In particular, the five departments in our review made up about one-third of the civilian planned expenditures (see fig. 1). OMB plays a key role in overseeing these IT investments and how they are managed, stemming from its predominant mission: to assist the President in overseeing the preparation of the federal budget and to supervise budget administration in executive branch agencies. In helping to formulate the President’s spending plans, OMB is responsible for evaluating the effectiveness of agency programs, policies, and procedures; assessing competing funding demands among agencies; and setting funding priorities. To carry out these responsibilities, OMB depends on agencies to collect and report accurate and complete information; these activities depend in turn on agencies having effective IT management practices. To drive improvement in the implementation and management of IT projects, the Congress enacted the Clinger-Cohen Act in 1996, which expanded the responsibilities of OMB and the agencies that had been set under the Paperwork Reduction Act. The Clinger-Cohen Act requires that agencies engage in capital planning and performance- and results-based management. The act also requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by executive agencies. OMB is also required to report to the Congress on the net program performance benefits achieved as a result of major capital investments in information systems that are made by executive agencies. With regard to OMB’s responsibilities in this area, we recently issued a report that provided recommendations to improve OMB’s processes for monitoring high-risk IT investments. 
Since that report was released, OMB has issued additional guidance outlining steps that agencies must take for all high-risk projects to better ensure improved execution and performance as well as promote more effective oversight. In response to the Clinger-Cohen Act and other statutes, OMB developed policy for planning, budgeting, acquisition, and management of federal capital assets. This policy is set forth in OMB Circular A-11 (section 300) and in OMB's Capital Programming Guide (supplement to Part 7 of Circular A-11), which directs agencies to develop, implement, and use a capital programming process to build their capital asset portfolios. Among other things, OMB's Capital Programming Guide directs agencies to

- evaluate and select capital asset investments that will support core mission functions that must be performed by the federal government and demonstrate projected returns on investment that are clearly equal to or better than alternative uses of available public resources;

- institute performance measures and management processes that monitor actual performance and compare to planned results; and

- establish oversight mechanisms that require periodic review of operational capital assets to determine how mission requirements might have changed and whether the asset continues to fulfill mission requirements and deliver intended benefits to the agency and customers.

Among OMB's goals in requiring the use of a capital programming process is to assist agencies in complying with a number of results-oriented requirements.
Key requirements include those set by

- the Federal Acquisition Streamlining Act of 1994, which (1) requires agencies to establish cost, schedule, and measurable performance goals for all major acquisition programs and (2) establishes that agencies should achieve on average 90 percent of those goals;

- the Government Performance and Results Act of 1993, which establishes the foundation for budget decision making to achieve strategic goals in order to meet agency mission objectives; and

- the Federal Information Security Management Act of 2002, which requires agencies to integrate IT security into their strategic and operational planning processes, such as the capital planning and enterprise architecture processes at the agency.

OMB is aided in its responsibilities by the Chief Information Officers (CIO) Council as described by the E-Government Act of 2002. The council is designated the principal interagency forum for improving agency practices related to the design, acquisition, development, modernization, use, operation, sharing, and performance of federal government information resources. Among the specific functions of the CIO Council are the development of recommendations for the Director of OMB on government information resources management policies and requirements and the sharing of experiences, ideas, best practices, and innovative approaches related to information resources management. The CIO Council has issued several guides on capital planning and investment management over the past several years. To further support the implementation of IT capital planning practices, we have developed an IT investment management (ITIM) framework that agencies can use in developing a stable and effective capital planning process, as required by statute and directed in OMB's Capital Programming Guide. Consistent with the statutory focus on selecting, controlling, and evaluating investments, this framework focuses on these processes in relation to IT investments specifically.
It is a tool that can be used to determine both the status of an agency’s current IT investment management capabilities and the additional steps that are needed to establish more effective processes. Mature and effective management of IT investments can vastly improve government performance and accountability. Without good management, such investments can result in wasteful spending and lost opportunities for improving delivery of services to the public. The ITIM framework lays out a coherent collection of key practices that, when implemented in a coordinated manner, can lead an agency through a robust set of analyses and decision points that support effective IT investment management. The framework explicitly calls for consideration of cost, schedule, benefit, and risk objectives, including the development of analyses such as return on investment and a risk management plan. The framework also describes the criticality of tracking progress using valid and complete data. The guidance laid out in the ITIM framework is consistent with the requirements of OMB’s Circular A-11 and matches it in many instances. For example, among the requirements on the exhibit 300 is that agencies indicate that the investment has been reviewed and approved by the responsible oversight entity. The agency investment review board is a critical element of the ITIM framework, and the expectation for the board to select and oversee IT investments is explicit. In previous work using our IT investment management framework, we reported that the use of IT investment management practices by agencies was mixed. For example, a few agencies that have followed the ITIM framework in implementing capital planning processes have made significant improvements. 
In contrast, however, we and others have continued to identify weaknesses at agencies in many areas, including immature management processes to support both the selection and oversight of major IT investments and the measurement of actual versus expected performance in meeting established IT performance measures. For example:

- We recently reported that the HHS senior investment board does not regularly review component agencies' IT investments, leaving close to 90 percent of its discretionary investments without an appropriate level of executive oversight. To remedy this weakness, we recommended that the department (1) establish a process for the investment board to regularly review and track the performance of a defined set of component agency IT systems against expectations and (2) take corrective actions when these expectations are not being met.

- At DHS, we determined that the department's draft information resources management strategic plan did not include fully defined goals and performance measures. To address this weakness, we recommended that the department establish IT goals and performance measures that, at a minimum, address how information and technology management contributes to program productivity, the efficiency and effectiveness of agency operations, and service to the public.

- A recent review by the DOD Inspector General determined that over 90 percent of the business cases submitted to OMB in support of the DOD fiscal year 2006 budget request did not completely respond to one or more data elements addressing security funding, certification and accreditation, training and security plans, and enterprise architecture. The DOD Inspector General concluded that, as a result, these submissions continued to have limited value and did not demonstrate that the department was effectively managing its proposed IT investments for fiscal year 2006.
Besides providing policy for planning, budgeting, acquisition, and management of federal capital assets, section 300 of OMB’s Circular A-11 instructs agencies on budget justification and reporting requirements for major IT investments. Section 300 defines the budget exhibit 300, also called the Capital Asset Plan and Business Case, as a document that agencies submit to OMB to justify resource requests for major IT investments. According to OMB, only priority capital asset investments that comply with the policies for good capital programming, as described in the Capital Programming Guide, will be recommended for funding in the President’s Budget. The exhibit 300 was established as a source of information on which to base both quantitative decisions about budgetary resources consistent with the Administration’s program priorities and qualitative assessments about whether the agency’s planning, acquisition, management, and use of capital assets (investments) are consistent with OMB policy and guidance. The types of information included in the exhibit 300 are intended, among other things, to help OMB and the agencies identify and correct poorly planned or performing investments (i.e., investments that are behind schedule, over budget, or not delivering expected results) and real or potential systemic weaknesses in federal information resource management (such as a shortage of sufficiently qualified project managers). According to Circular A-11, the information in the exhibit 300 allows the agency and OMB to review and evaluate each agency’s IT spending and to compare IT spending across the federal government. Further, the information helps the agency and OMB to provide a full and accurate accounting of IT investments for the agency, as required by the Paperwork Reduction Act and the Clinger-Cohen Act. The exhibit 300 is required for all assets, though certain sections apply only to information technology. 
Table 1 provides a description of the key sections of the exhibit 300, as well as examples of the types of documentation that provide support for the data summarized in the exhibit 300 (although the supporting documentation may take other forms). This support may be derived from a variety of sources, including financial management systems and management processes that agencies carry out to comply with federal requirements and guidelines (such as the Federal Information Security Management Act of 2002 and the Federal Enterprise Architecture), as well as from analyses carried out specifically in support of the management of the investment. According to OMB guidance, the life-cycle stage of the asset affects what is reported on the exhibit 300: New investments (i.e., proposed for budget year or later, or in development) must be justified based on the need to fill a gap in the agency’s ability to meet strategic goals and objectives with the lowest life-cycle costs of all possible alternatives and provide risk-adjusted cost and schedule goals and measurable performance benefits. Mixed life-cycle investments (i.e., investments that are operational but include some developmental effort, such as a technology refresh) must demonstrate satisfactory progress toward achieving baseline cost, schedule, and performance goals using an EVM system. Operational investments (i.e., steady state) must demonstrate, among other things, how close actual annual operating and maintenance costs are to the original life-cycle cost estimates; whether the technical merits of the investment continue to meet the needs of the agency and customers; and that an analysis of alternatives was performed with a future focus. OMB requires agencies to transmit exhibit 300s electronically, using a predefined format. 
To meet this requirement and facilitate the aggregation of elements of the exhibits from various sources throughout the organization, many agencies use software applications to compile their exhibit 300s. Besides aggregating portions of the exhibit 300, these tools are also designed to perform certain calculations, such as return on investment and those required for earned value analysis. Although the agencies reported that all 29 exhibit 300s had been approved by their investment review boards (as required), in many instances, support for the information provided was not adequate. (Details on the 29 investment projects described in the exhibit 300s that we reviewed are provided in app. III.) Three types of problems were evident. First, all exhibit 300s had documentation weaknesses. For example, each investment lacked documentary support for one or more of the following: Analysis of Alternatives, Risk Inventory and Assessment, and Performance Measures and Goals. In other cases, the supporting material that was provided to us did not match the information in the exhibit 300. Second, agencies did not always demonstrate (for example, in the Security and Privacy and the Project and Funding Plan sections) that they complied with federal requirements or policies with regard to management and reporting processes. Finally, information in some sections (such as the Summary of Spending table and the Project and Funding Plan) could not be relied upon because the numbers were not derived using repeatable processes or reliable systems. Agency officials attributed the absence of adequate support for their exhibit 300s to lack of understanding of the requirements or of how to respond to them. Agency officials mentioned in particular insufficient guidance or training, as well as lack of familiarity with particular requirements, such as the EVM process.
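For context, the earned value calculations referred to above follow a small set of standard formulas. The sketch below is illustrative only; the dollar figures are hypothetical and are not drawn from any of the investments reviewed.

```python
# Illustrative earned value management (EVM) metrics of the kind an
# EVM process produces. All input figures are hypothetical.

def evm_metrics(bcws, bcwp, acwp, bac):
    """Compute standard EVM indicators.
    bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed
    bac:  budget at completion
    """
    cv = bcwp - acwp   # cost variance (negative = over budget)
    sv = bcwp - bcws   # schedule variance (negative = behind schedule)
    cpi = bcwp / acwp  # cost performance index
    spi = bcwp / bcws  # schedule performance index
    eac = bac / cpi    # estimate at completion, assuming current efficiency
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

m = evm_metrics(bcws=500_000, bcwp=450_000, acwp=600_000, bac=2_000_000)
# A CPI of 0.75 here means each dollar spent earned only $0.75 of planned work,
# which is the kind of early warning sign of cost overruns that EVM is meant
# to provide.
```

Indicators of this kind are only as reliable as the underlying cost and schedule data, which is why the ANSI standard's process controls matter.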
If underlying support is inadequate in key areas, OMB and agency executives are depending on unreliable information to monitor the management of major IT projects and to make critical decisions on their funding, thus putting at risk millions of dollars in investments. OMB Circular A-11 states that agencies must justify funding requests for major acquisitions by demonstrating, among other things, measurable performance benefits, comprehensive risk mitigation and management planning, and positive return on investment for the planned investment. Agencies are instructed to establish performance metrics (including baselines from which progress can be measured) to ensure that project managers are accountable in meeting expected performance goals and that projects are aligned with the agencies’ strategic goals. Agencies are also expected to manage investment risk through a robust risk management program; according to OMB’s guidance, agencies need to actively manage risks from initial concept throughout the life cycle of each investment. To demonstrate a positive return on investment for the selected alternative and identify a project’s total lifetime cost, OMB requires agencies to compare alternatives and report summary cost information for investments (including calculations for payback period and net present value). Documents produced in the performance of these activities provide evidence that they were carried out as required. Performance measures. The investments did not usually demonstrate the basis for the performance measure information provided in the exhibit 300. Only 6 of the 29 investments had documentation to support how agencies initially measured their baseline levels of performance, from which they measured progress toward the agency’s strategic goals. In most cases, the investments lacked documentation describing the levels of performance that had been achieved or how these results actually helped meet agency strategic needs. 
The absence of documentation in these cases could indicate a systemic weakness in agency performance management practices, since well-developed practices should provide the expected support. This finding is consistent with our prior work where we determined that agencies were generally not measuring actual versus expected performance in meeting IT performance goals. Weak performance management practices reduce the ability of agency executives to track investment performance in meeting performance objectives and raise the risk that investments will not be well aligned with agency strategic objectives. Risk management. About 75 percent of the investments were unable to demonstrate that they were actively addressing the risk elements that OMB specifies in Circular A-11, or how they had determined that any of those risks were not applicable. In addition, documentation of risk management that was provided had significant weaknesses. In one case, a risk management plan was approximately 9 years old and had not been updated, and for three investments, the risk documentation addressed only the project development phase, even though the systems had exited that phase and were in full operation. Analysis of alternatives. All 29 investments reported cost information in the analysis of alternatives section of the exhibit 300. However, in about 72 percent of the exhibit 300s reviewed, either supporting documentation was missing for this cost information, or information in the documentation did not agree with that in the exhibit 300. In cases where investments lacked documentation to support information reported in the performance and risk areas, project officials frequently told us that they had filled out these sections of the exhibit 300 to satisfy the reporting requirement, relying on their own knowledge of the investment rather than any project documentation. 
However, such an approach is not consistent with the requirement for providing accurate information in compliance with OMB capital programming and capital planning and investment control policies. In addition, several project officials told us that they believed some of the 19 risk management areas required in the exhibit 300 were not applicable to their investment, but they reported on those categories nonetheless to fulfill the requirement. Although the guidance instructs agencies to indicate whether the risk category was not applicable, officials stated that their impression is that “not applicable” responses might lower the evaluation of their investments and reduce or eliminate their funding. Further, agency officials generally responded that the training they received for preparing the exhibit 300 was not sufficient. For example, one agency commented that agencies would benefit from targeted OMB training that would address agency-specific questions. Several agencies stressed that OMB training should occur earlier in the budget cycle. In addition, one agency said that it needed OMB training on preparing each section of the exhibit 300. Overall, the lack of documentation supporting the exhibit 300s raises questions regarding the sufficiency of the business case for the investment and the quality of the projects’ management. Compliance with OMB and other federal guidance and related federal laws helps ensure that agency investments are managed in a manner consistent with the intent of the Congress and that key information is available to OMB and agency managers on which they can base informed decisions. 
The security section of the exhibit 300 requires that agencies demonstrate that they have developed information security plans in accordance with the Federal Information Security Management Act of 2002 (FISMA); according to FISMA, these plans must include rules of behavior for system use, technical security controls, and procedures for incident handling—that is, how to respond to system security breaches. In addition, agencies must ensure that employees and contractors receive security awareness training. Guidance from the National Institute of Standards and Technology (NIST) supports FISMA by outlining the necessary components of key security documentation, including security plans, certification and accreditation packages, and security controls testing. For the analysis of alternatives section, OMB’s instructions for the exhibit 300 cite the Clinger-Cohen Act, which requires agencies to complete a cost-benefit analysis for new IT investments, and OMB Circular A-94, which outlines requirements for completing cost-benefit and cost-effectiveness analyses, including the comparison of at least three alternatives, a discussion of assumptions for each alternative, and an analysis of uncertainty (a sensitivity assessment to raise awareness of the potential for unforeseen impacts on the investment). For the project and funding plan section, OMB Circular A-11 provides guidance that requires an agency to have in place a process for monitoring the investment’s status in accomplishing baseline cost and schedule goals. For the 29 investments, agency compliance with the FISMA and NIST requirements described above was mixed. For example, about 86 percent of all investments could demonstrate, based on documentation, that security awareness training had been conducted for employees and contractors and that a mechanism for tracking completion of security awareness training had been established.
In addition, 21 of the 22 operational investments (for which information security plans are required) had security plans that addressed areas such as the rules of behavior for system use and technical security controls. In contrast, about 77 percent of these 22 investments did not provide support describing how incident handling activities would be performed at a system level, such as detecting, reporting, and mitigating risks associated with security incidents. While the compliance of security documentation with federal requirements was mixed, the documented support for the analysis of alternatives and the project and funding plan areas of the exhibit 300 provided little assurance that investments complied with applicable guidance and laws. None of the investments had cost analysis documentation that fully complied with Circulars A-94 and A-11 criteria (lacking, for example, a comparison of at least three alternatives, a discussion of assumptions for each alternative, or an analysis of uncertainty). Project officials attributed deficiencies in the analysis of alternatives to, among other things, a lack of understanding of what was expected for reporting in the exhibit 300. In a few instances, officials noted that they believed that their investments were excluded from meeting the federal requirements because the investments were near the end of their operational or, in some cases, useful life cycles. OMB guidance on analysis of alternatives does not differentiate between operational and developmental investments; nonetheless, one agency’s internal guidance explicitly states that no analysis of alternatives is necessary for investments in the steady state (that is, operational). However, a forward-looking analysis of alternatives for operational investments can help agencies recognize when an alternative solution may be more efficient or effective than the current investment, thereby freeing scarce resources to be reallocated. 
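An analysis of alternatives of the kind Circular A-94 describes rests on a few standard calculations, principally net present value and payback period. The sketch below is illustrative only: the cash flows, discount rate, and alternative names are hypothetical assumptions, and the circular's required discussion of assumptions and analysis of uncertainty are omitted for brevity.

```python
# Illustrative comparison of three hypothetical alternatives using net
# present value (NPV) and payback period. Figures and the 7 percent
# discount rate are assumptions for illustration, not A-94 prescriptions.

def npv(rate, cash_flows):
    """Net present value of a cash flow series; cash_flows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """First year in which cumulative (undiscounted) cash flow turns
    nonnegative, or None if the investment never pays back."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

# Year-0 acquisition cost followed by four years of net benefits (or,
# for the status quo, continuing operations and maintenance costs).
alternatives = {
    "Alternative A (new system)":  [-1_000_000, 400_000, 400_000, 400_000, 400_000],
    "Alternative B (larger system)": [-1_500_000, 600_000, 600_000, 600_000, 600_000],
    "Alternative C (status quo)":  [-100_000, -50_000, -50_000, -50_000, -50_000],
}
for name, flows in alternatives.items():
    print(name, round(npv(0.07, flows)), payback_period(flows))
```

A comparison of this kind, repeated with a forward focus for operational investments, is what allows an agency to recognize when an alternative solution may be more efficient than the current one.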
The agencies’ lack of compliance with OMB guidelines for analysis of alternatives, including the cost-benefit analysis, leaves senior executive managers at risk of making poor investment management decisions on incomplete and sometimes inaccurate information. For the project and funding plan section of the exhibit 300, OMB Circular A-11 provides guidance on the information to be provided, which depends upon the state of the investment (i.e., new, mixed life cycle, or steady state). According to this guidance, information presented in the project and funding plan is to be derived from one of two types of analysis: for steady state investments, an operational analysis, and for new and mixed life-cycle investments, an analysis based on an EVM process that is compliant with ANSI/EIA-748-A. Operational analysis is a method for assessing the technical merits of an existing investment in meeting user needs, while EVM is a method for assessing the value of work performed compared to its actual cost during development of an investment. Of the eight steady state investments we reviewed, only two had conducted an operational analysis. Furthermore, only one of those had documented procedures that were in accordance with OMB’s Capital Programming Guide criteria, such as addressing user needs and technical performance. In most cases for which no operational analyses were in place, agency officials commented that OMB guidance describing how to perform an operational analysis was at such a high level of generality that they found it difficult to follow. Instead of attempting to devise and perform an operational analysis, therefore, they implemented variations on an EVM process. However, these implementations of EVM did not address topics required for the operational analysis, such as user needs and technical performance. 
Unless they address these topics, agencies may not have the information they need to determine, among other things, whether investments are performing as intended and meeting user needs. Similarly, of the 21 new and mixed life-cycle investments required to use EVM, only 6 used an EVM process that generally followed the ANSI standard. Since fiscal year 2002, OMB has required the use of EVM as a project management tool. The ANSI standard is intended to ensure that data produced by an EVM process are reliable so as to allow objective reports of project status, produce early warning signs of impending schedule delays and cost overruns, and ultimately provide unbiased estimates of anticipated costs at completion. If agencies do not implement EVM processes that follow the ANSI standard, they have reduced assurance that the information used for tracking the cost, schedule, and performance of the investment is reliable. For the remaining 15 investments that did not have EVM processes following the required standard, project officials commented that EVM was relatively new to them and that they did not understand how to implement an ANSI-compliant process at the time of the fiscal year 2006 submission. At the time of our review, all five departments stated that they were working toward implementing compliant processes. To OMB’s credit, it recognized the need for improvement in the execution of agencies’ IT projects and has issued clarifying guidance on the implementation of EVM. This guidance, issued in August 2005, could be expected to have an impact on the exhibit 300s prepared for fiscal year 2008. 
Under this guidance, agencies are instructed, among other things, to develop comprehensive agency policies for using EVM to plan and manage development activities for major IT investments no later than December 31, 2005; include a provision and clause in major acquisition contracts or agency in-house project charters directing the use of an EVM system compliant with the required standard; and provide documentation demonstrating that the contractor’s or agency’s in-house EVM system complies with the required standard and conduct periodic surveillance reviews. Additionally, the Civilian Agency Acquisition Council and the Defense Acquisition Regulations Council published in the Federal Register a proposed amendment to the Federal Acquisition Regulation (FAR Case 2004-019) to standardize EVM contract policy across the government. In previous work, we have reported that EVM can have a significant impact on the success of an IT acquisition because it heightens visibility into whether a program is on target with respect to cost, schedule, and technical performance. Therefore, it is important that the process is implemented properly to maximize its value as a project management tool. If it is not implemented effectively, agency executives and OMB risk making poor investment decisions based on inaccurate and potentially misleading EVM information. Accurate and timely cost management information is critical for federal managers to understand the progress of major projects and vital in developing meaningful links among budget, accounting, and performance. The Federal Financial Management Improvement Act of 1996 emphasizes the need for agencies to have systems that are able to generate reliable, useful, and timely information for decision-making purposes and to ensure accountability on an ongoing basis. 
In previous work, we have reported on the lack of adherence to federal accounting standards throughout the federal government and have made recommendations that agencies improve cost-accounting systems. At every agency, cost information reported in the 29 exhibit 300s was derived from ad hoc processes rather than from cost-accounting systems with adequate controls to ensure accountability. This condition had an impact on two particular areas of the exhibit 300—the summary of spending table and the project and funding plan section: Figures for dollars expended for the prior year (in this case, fiscal year 2004) were not reliable. In all cases, documentation provided to support prior year cost figures in the summary of spending table showed that the information was derived from ad hoc sources, such as spreadsheet estimates, handwritten figures, or e-mails. Therefore, the cost data reported in the exhibit 300 are not verifiable. Information in the project and funding plans was also unreliable for the 21 new and mixed life-cycle investments required to use EVM. As discussed earlier, 15 of these investments reported cost figures based on EVM processes that did not follow the ANSI standard; because the standard was not followed, these processes did not have the controls necessary to ensure that the data they produced were reliable. The other 6 investments had ANSI-compliant EVM processes in place for the contractor component of the investment costs, but the government component of the investment costs was derived from ad hoc systems (such as tracking government costs in spreadsheets based on project managers’ own records); thus, that portion of the data was not reliable, lending a degree of unreliability to the overall EVM reports. The lack of accurate cost figures limits decision makers’ ability to determine the actual resources expended on an investment, and therefore inhibits their ability to make fully informed decisions on whether to proceed.
Without reliable systems that meet federal standards, government agencies cannot produce reliable cost-based performance information. The usefulness of the exhibit 300 business case as a mechanism to support the selection and oversight of federal IT investments is undercut by the kinds of weaknesses displayed in the 29 business cases that we reviewed. Although we cannot directly project these examples to the more than one thousand business cases developed each year across the federal government, our results suggest that the issues raised need attention. The shortcomings in guidance and training are likely to be widespread, and so the weaknesses may extend beyond the specific examples identified here. The kinds of weaknesses displayed and the causes behind them are consistent with the pervasive problems with project and investment management that we have documented in numerous prior reports. The absence of documentary support in the cases reviewed raises questions regarding the sufficiency of the justification provided for the investment and undermines the management decisions being made based on it. More troubling, it may indicate an underlying weakness in the management of the investment, particularly since several sections of the exhibit are specifically designed to capture information from systems used in project management, such as those that support EVM and financial management. In many cases, inadequate support raises questions regarding the adequacy of an agency’s management processes and internal controls, which strongly affect the reliability of the information presented to decision makers. Further, in view of the inaccuracies in the cases reviewed, it is evident that agencies are not taking sufficient actions to ensure the accuracy of the information in the exhibit 300s. To make reasonable decisions, management needs to be aware of limitations in the data on which they rely and thus be able to take steps to mitigate the risks involved. 
Collectively, our findings raise questions on whether fundamental project management processes are in place, whether project managers are adequately trained in these processes, and whether they receive sufficient guidance on these processes and on preparing all areas of the exhibit 300. At a minimum, this situation undermines the usefulness of the exhibit 300 as a mechanism to communicate to OMB and agency executives that the project team has employed the disciplines of good project management. By reporting information that is not supported by documentation, the exhibit 300 can create the misleading appearance that investments are being managed properly, when in fact they are not. In addition, OMB has relied on these exhibits to identify and oversee high-risk projects; thus, our finding that the data being presented to OMB may not be reliable or accurate further complicates its oversight. While OMB is applying more rigor to its oversight processes through such processes as its tracking of high-risk investments, these advances may be undermined by inaccurate or unreliable data used in decision making. Unless these weaknesses are addressed, OMB, agency executives, and Congress will not have assurance that key decisions to pursue and oversee the $65 billion in IT investments are being made based on accurate and reliable information. To improve the accuracy and validity of exhibit 300s for major IT investments and to increase the value of using the information they provide in selection, oversight, and resource allocation decisions, we are making three recommendations. 1. Because decision makers should be aware of any weaknesses in the processes used to develop the information in the exhibit 300s, we are recommending that the Director of OMB direct agencies to determine the extent to which the information contained in each exhibit 300 is accurate and reliable. 
Where weaknesses in accuracy and reliability are identified, the agency should be required to disclose them and explain the agency’s approach to mitigating them. In addition, to help ensure that agency personnel completing exhibit 300s better understand their responsibilities, we recommend that the Director of OMB take the following additional actions: 2. In advance of OMB’s next issuance of the Circular A-11 update, develop and promulgate clearer and more explicit guidance for sections of the exhibit 300 business case that cause confusion, including addressing weaknesses identified in this report (as indicated below) and consulting with agency personnel having responsibility for completing exhibit 300s across government to identify other areas of confusion. At a minimum, the guidance should do the following: Provide a more detailed description of the requirements for completing an operational analysis, as referred to in the supplement to Part 7 of Circular A-11, the Capital Programming Guide. Address or clarify possible flexibilities and alternative approaches available to agencies in completing their exhibit 300s: for example, whether the analysis of alternatives section of the exhibit 300 needs to be updated every year for steady state investments and whether all risk areas are relevant for all investments. 3. Provide for training of agency personnel responsible for completing exhibit 300s. This training should go beyond a description of changes from prior years’ guidance and include working through examples for a variety of investments. In developing the training, OMB should consult with agencies to identify deficiencies that the training should address. In implementing these recommendations, OMB should work with the CIO Council to develop the necessary guidance and implement an effective training program to ensure governmentwide acceptance of these changes. 
Because we have outstanding recommendations aimed at enhancing OMB’s audit guidance related to federal cost-accounting systems, we are not making any new recommendations in this report regarding federal cost accounting. We provided a draft of this report to OMB and the five agencies whose exhibit 300s we reviewed. In written comments received on December 23, 2005, the Administrator of OMB’s Office for E-Government and Information Technology accepted the findings of the draft report. OMB described two of our three recommendations and expressed three concerns: first, that our report does not address the need for agencies to ensure the accuracy of their IT investment requests; second, that the report focuses on the way agency employees fill out OMB’s exhibit 300s and not on the underlying management responsibilities; and third, that by directing our recommendations to OMB rather than to the agencies, we could be seen as suggesting that OMB and not the agencies are responsible for data accuracy and employee training. OMB’s concern regarding data accuracy is addressed by our first recommendation: that the Director of OMB instruct agencies to determine the extent to which the information contained in each exhibit 300 is accurate and reliable, to disclose weaknesses, and to describe the agency’s approach to mitigating these weaknesses. This recommendation clearly places responsibility on the agencies for assessing the quality of their budget information and the processes that produced this information. With respect to OMB’s concern that the recommendations do not focus on how well agencies fulfill their underlying information resources management responsibilities, our view is that our recommendation on disclosing and mitigating weaknesses does address these underlying responsibilities. The report specifically addresses the exhibit 300s and the reliability of these documents when used as support in the agencies’ and OMB’s decision-making processes. 
As our report clearly states, the lack of documentation may indicate an underlying weakness in the management of the investment. In many cases, inadequate support raises questions about the investments’ program management and internal controls. Requiring agencies to disclose and mitigate associated weaknesses presupposes that agencies examine and address their approach to fulfilling information resources management responsibilities. Regarding OMB’s third concern, we do not intend to suggest that agencies are not responsible and accountable for the weaknesses we describe. We place significant responsibility on agencies to manage their information assets effectively, as reflected in our first recommendation and in the large number of evaluations that we have previously conducted at individual agencies and the resulting recommendations, some of which are still outstanding. In this report, however, our recommendations are directed to OMB because they address findings relating to OMB-required budget documents, and OMB has statutory responsibility for providing information resources management guidance governmentwide. Regarding OMB’s comment that agencies be held responsible for employee training in information resources management, we agree that agencies are responsible for such training. However, as agencies indicated during the review, additional training by OMB would be helpful, especially in the understanding of OMB’s requirements for the exhibit 300. This is also consistent with OMB’s responsibility under the E-Government Act of 2002 to identify where current training does not satisfy the personnel needs related to information technology and information resource management. The Deputy Associate Chief Information Officer for Information Technology Reform of the Department of Energy provided largely technical comments, which we incorporated as appropriate.
The Director of Audit Relations of the Department of Transportation also provided technical comments that were incorporated as appropriate. The Departments of Agriculture, Commerce, and the Treasury provided no comments. The written comments from OMB are reproduced in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of the Departments of Agriculture, Commerce, Energy, Transportation, and the Treasury and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-9286. I can also be reached by e-mail at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and key contributors to this report are listed in appendix IV. Our objective was to ascertain the extent to which selected agencies have underlying support for the information described in their fiscal year 2006 exhibit 300s as submitted to the Office of Management and Budget (OMB) in September 2004. To address our objective, we reviewed the supporting documentation for 29 exhibit 300s from agencies and components from the Departments of Agriculture, Commerce, Energy, Transportation, and the Treasury. We selected the five departments for our review on the basis of two criteria. First, to ensure that we examined significant investments, we selected departments that expected to spend $1 billion or more on information technology (IT) investments in fiscal year 2006.
Second, of those agencies with significant investments, we further narrowed our selection to those with the first and second largest number of IT investments in each of three categories of the federal government’s Business Reference Model (BRM): Services for Citizens, Support Delivery of Services, and Management of Government Resources. We did this to ensure that the agencies under review reflect the primary business operations performed by the federal government. We excluded the Mode of Delivery business area because we found investments in this area to be largely from one agency, the Department of Defense (DOD). (In general, Mode of Delivery describes the mechanisms the government uses to deliver its Services for Citizens.) (We excluded DOD and the Department of Homeland Security (DHS) from our selection because the Defense Inspector General recently performed an extensive review of exhibit 300s, and we have both completed and ongoing work on several major IT investments at DHS.) This process resulted in the selection of the five departments mentioned above. To make our selection of IT investments from the five departments, we used OMB capital planning and budget documentation to identify a mix of investments. Specifically, we chose IT investments that (1) supported government operations across each of the three BRM business areas identified above and (2) reflected different stages of investment (e.g., new, mixed life cycle, and steady state). Initially, we selected a total of 30 investments (i.e., 6 investments from each department). However, one IT investment was dropped from our total of 30 selected investments because we determined during our review that OMB and the agency had cancelled its funding. 
To determine the extent of each investment’s underlying support, we developed a set of questions regarding the types of analysis and documentation that were associated with the information provided in each of the major sections of OMB’s exhibit 300. Using our set of questions, we met with agency officials for each selected investment to collect and analyze investment documentation associated with each exhibit 300 area in our evaluation. We further compared the documentation against the exhibit 300 to ascertain whether it agreed with what the investment reported in the exhibit 300. Where federal requirements, laws, and other guidelines were cited in Circular A-11, we also used these to assess the extent to which agencies and components had complied with specific documentation requirements as prescribed in these sources (including National Institute of Standards and Technology (NIST) guidance, OMB circulars, and OMB memorandums). In areas where federal directives were cited in the exhibit 300, we conducted limited reliability testing; these areas included security, analysis of alternatives, and the project and funding plan. In our evaluation of security documentation, we used criteria set forth in NIST guidance to assess whether the major components were present in key documents, which included the security plan and system-level incident handling procedures. For security awareness training, we identified whether training was conducted and tracked but did not assess its content. In our evaluation of the analysis of alternatives, we used criteria from OMB Circular A-94 to assess whether the major components were present in the cost-benefit or cost-effectiveness analysis. For our evaluation of the project and funding plan sections, in cases where investment managers told us that their earned value management (EVM) processes were in conformance with ANSI/EIA-748-A, we used criteria from that standard to assess whether key EVM processes were in place. 
We did not test the quality of the documentation in these areas of evaluation. Regarding the reliability of cost data, we did not test the adequacy of agency or contractor cost-accounting systems. Our evaluation was based on what we were told by the agencies and the information they could provide (to the extent that they had it). We performed our work at the agencies’ offices in the Washington, D.C., metropolitan area. We conducted our review between March and November 2005 in accordance with generally accepted government auditing standards. The following provides additional detail on the agencies and investments that we reviewed as part of this audit. We reviewed a total of 29 investments at five departments: Agriculture, Commerce, Energy, Transportation, and the Treasury. The selected departments account for the first and second largest number of IT investments in each of three categories of the federal government’s Business Reference Model: Services for Citizens, Support Delivery of Services, and Management of Government Resources. According to OMB guidance, the life-cycle stage of the asset affects what is reported on the exhibit 300: New investments (i.e., proposed for budget year or later, or in development) must be justified based on the need to fill a gap in the agency’s ability to meet strategic goals and objectives with the lowest life-cycle costs of all possible alternatives and must provide risk-adjusted cost and schedule goals and measurable performance benefits. Mixed life-cycle investments (i.e., investments that are operational but include some developmental effort, such as a technology refresh) must demonstrate satisfactory progress toward achieving baseline cost, schedule, and performance goals using an EVM system. 
Operational investments (i.e., steady state) must demonstrate, among other things, how close actual annual operating and maintenance costs are to the original life-cycle cost estimates, whether the technical merits of the investment continue to meet the needs of the agency and customers, and that an analysis of alternatives was performed with a future focus. Brief description: This system is expected to automate processes to allow the Department of Agriculture to issue, track, and rapidly verify the validity of a federal permit allowing the importation of plants and animals. It is also expected to assist the public by allowing applicants to apply for permits, check the status of permit applications, and receive permits online. Brief description: This investment is designed to represent the entire portfolio of current corporate financial management and administrative payment systems for the department. It is a corporatewide solution for financial management reform and systems integration that provides tools for program and financial managers to manage and evaluate federal programs. Brief description: This system is intended to be a single enterprisewide acquisition management system to support a strategic and more standardized acquisition management process for Agriculture. It is expected to provide a real-time interface to the department’s core financial system, reliable data, and a shortened time for acquiring goods and services. Brief description: This system is expected to establish a new process to collect and track phytosanitary certificates issued by the department, which attest to compliance with import regulations of importing countries. It is also intended to provide better service to users by reducing the need for repetitive data entry from applicants and enabling certifying officials to deliver certificates in a timelier manner. 
Brief description: This system is designed to support the annual acquisition, tracking, and distribution of commodities acquired by Agriculture for domestic and foreign food assistance programs by providing financial and program management, reporting, and control to track commodity requests against purchases and distributions from inventory. Brief description: This system is intended to support the department’s Food Stamp Program mission by tracking and monitoring food coupon/electronic benefit redemption activities and regulatory violations by businesses and associated administrative actions related to enforcement of penalties, among other things. This initiative is expected to replace the current legacy system, which has been in place since 1993. Brief description: This system is designed to be an interactive computer system that integrates all meteorological and hydrological data and all satellite and radar data to enable the forecaster to prepare and issue more accurate and timely forecasts and warnings. Brief description: This system is expected to provide an integrated solution to weather and water data archive and access, including an access portal with search, browse, and geospatial capabilities for users to obtain environmental data, contributing to improvements in prediction capabilities. Brief description: This investment is designed to provide statistical programs that count and profile U.S. businesses and government organizations through the gathering of surveys and principal economic indicators in order to conduct research and technical studies. Brief description: The current system is designed to expedite monthly statistics on international trade, remedy shortcomings in export statistics, and help to control the export of weapons or other hazardous items that could be a threat to U.S. national security or public welfare. 
The proposed initiative is designed to improve the current system to handle electronic filing of all export transactions, incorporate an electronic manifest system, and provide for verification of export information reported on export transactions. Brief description: The current system is designed to collect and distribute raw and processed hydrometeorological data and products, disseminating weather observations and guidance to a national and international community of customers. Improvements to the current system are expected to provide sufficient performance, capacity, and catastrophic backup capability to meet current and future demands for data. Brief description: This system is designed to command and control Commerce’s operational environmental satellites and to acquire and manage the weather and water data the satellites collect, in order to provide support functions that are not available commercially, such as real-time hurricane support. Brief description: This project is designed to support scientific research by providing an interoperable, effective, and reliable communications infrastructure and network services to the Department of Energy research facilities. Brief description: This system is expected to be an enterprisewide, integrated document and records management system that will include portal accessibility and integration with knowledge management tools in order to improve decision and service delivery quality and serve as a resource for operations management. Brief description: This system is designed to support the routine collection and reporting needs of Energy for life-cycle planning, budget formulation, and project and budget execution. Brief description: This is a Web-based system that is intended to make relevant documentary material supporting the Nuclear Regulatory License Application available to users, as part of the requirements of the Nuclear Waste Policy Act. 
Brief description: This investment is intended to identify, design, and implement the systems, processes, and controls related to financial management, human resources, supply chain management, facilities maintenance, information management, project management, and manufacturing in order to lower costs and provide more efficient operations and improved management. Brief description: This investment is intended to provide the Department of Transportation (DOT) with asset management and supply chain management information systems to track and manage over $21 billion in federal government assets. Reducing the number of information systems, optimizing supply chain operations, and streamlining business operations of employees are expected to result in reduced costs to the agency. Brief description: This program is expected to consolidate several major and nonmajor DOT financial systems to interface or integrate all related systems in order to eliminate redundant data and processes. Brief description: This system is designed to collect performance data from over 640 local transit agencies for the purpose of reporting statistical data on the U.S. transit industry. Brief description: This system is intended to provide air pilot/controller voice and data communications by utilizing a digital-based air/ground communication system. Brief description: This system is expected to consolidate the agency’s 28 oversight systems on aviation regulatory compliance into 5 integrated aviation safety risk management systems. Its intended purpose is to allow applicable government agencies and the aviation industry to use common system safety applications and databases for managing and overseeing flight safety. 
Brief description: This is a navigation system that is designed to provide navigation across the entire United States for all classes of aircraft in all flight operations, including en-route navigation, airport departures, and airport arrivals including precision landing approaches in all weather conditions. Brief description: This system is part of a modernization program that is expected to provide the Department of the Treasury with the capability to manage its tax accounts utilizing new technology, applications, and databases. This system is designed to create applications for daily posting, settlement, maintenance, refund processing, and issue detection for taxpayer tax account and return data to improve customer service and compliance. Brief description: This system is designed to be a financial accounting system for activities associated with Treasury’s debt collection program to track funds recovered by the agency, post these funds to the proper account in an accurate and timely manner, and transfer moneys due to the appropriate government agencies. The system is also designed to record the general ledger activity and produce operational, management, and standard external reports. Brief description: This system is designed to be a front-end processing system that receives, validates, stores, forwards to mainframe electronic filing systems, and acknowledges electronic files containing tax documents. The system is intended to receive returns from third parties, acknowledge the receipt of information, format the information for mainframe processing, provide acknowledgements to the third parties, and send state return data to participating states. Brief description: This system is designed to produce accurate, accessible, and timely governmentwide financial information through the streamlining of reports and the reduction of the reconciliation burden on government agencies in order to minimize the amount of labor necessary to transfer financial information. 
Brief description: This system is intended to be a data capture, management, and storage system used to process tax documents automatically in order to meet mandated timelines and processing requirements for various tax forms and the Federal Tax Deposits, which directly impacts revenue brought into the federal treasury. Brief description: This system is designed to be a browser-based Internet version of the current Electronic Certification System, which will allow federal program agencies to submit certified requests for payment disbursement online. It is intended to provide a more secure payment process, increase the ability to protect sensitive financial and privacy data, and improve the financial performance of federal program agencies by providing program agencies a method of providing financial data to Treasury. David A. Powner, (202) 512-9286, pownerd@gao.gov. In addition to the contact named above, the following people made key contributions to this report: Carol Cha, Barbara Collier, Joseph Cruz, Lester Diamond, Valerie Hopkins, Sandra Kerr, Linda Lambert, Tammi Nguyen, Chris Owens, Mark Shaw, Kevin Walsh, and Martin Yeung.
Each year, agencies submit to the Office of Management and Budget (OMB) a Capital Asset Plan and Business Case—the exhibit 300—to justify each request for a major information technology (IT) investment. The exhibit's content should reflect controls that agencies have established to ensure good project management, as well as to show that they have defined cost, schedule, and performance goals. It is thus a tool to help OMB and agencies identify and correct poorly planned or performing investments. In its budget and oversight role, OMB relies on the accuracy and completeness of this information. GAO was asked to determine the extent to which selected agencies have underlying support for the information in their fiscal year 2006 exhibit 300s. From five major departments having over $1 billion in IT expenditures in that year, GAO chose for analysis 29 exhibits for projects that supported a cross section of federal activities. Underlying support was often inadequate for information provided in the exhibit 300s reviewed. Three general types of weaknesses were evident. All exhibit 300s had documentation weaknesses. Documentation either did not exist or did not fully agree with specific areas of the exhibit 300. For example, both these problems occurred in relation to calculations of financial benefits for most investments. In addition, for 23 of the 29 investments, information on performance goals and measures was not supported by explanations of how agencies had initially measured their baseline levels of performance (from which they determine progress) or how they determined the actual progress reported in the exhibit 300. Agencies did not always demonstrate that they complied with federal or departmental requirements or policies with regard to management and reporting processes. 
For example, 21 investments were required to use a specific management system as the basis for the cost, schedule, and performance information in the exhibit 300, but only 6 did so following OMB-required standards. Also, none had cost analyses that fully complied with OMB requirements for cost-benefit and cost-effectiveness analyses. In contrast, most investments did demonstrate compliance with information security planning and training requirements. In sections that required actual cost data, these data were unreliable because they were not derived from cost-accounting systems with adequate controls. In the absence of such systems, agencies generally derived cost information from ad hoc processes. Officials from the five agencies (the Departments of Agriculture, Commerce, Energy, Transportation, and the Treasury) attributed these shortcomings in support to lack of understanding of a requirement or how to respond to it. Agency officials mentioned in particular insufficient guidance or training, as well as lack of familiarity with particular requirements. The weaknesses in the 29 exhibit 300s raise questions regarding the sufficiency of the business cases for these major investments and the quality of the projects' management. Without adequate support in key areas, OMB and agency executives may be depending on unreliable information to make critical decisions on IT projects, thus putting at risk millions of dollars. Further, although the 29 examples cannot be directly projected to the over one thousand business cases developed each year across the federal government, the results suggest that the underlying causes for the weaknesses identified need attention. These weaknesses and their causes are also consistent with problems in project and investment management that are pervasive governmentwide, including at such agencies as the Departments of Defense, Health and Human Services, and Homeland Security, as documented in reports by GAO and others.
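The assessment approach this report describes (determining, for each exhibit 300 section, whether supporting documentation exists and whether it agrees with what the exhibit reports) can be pictured as a simple checklist evaluation. The sketch below is a hypothetical illustration only; the section names and statuses are invented rather than drawn from any of the 29 reviewed exhibits.

```python
# Hypothetical illustration of the report's documentation check: for
# each exhibit 300 section, does supporting documentation exist, and
# does it agree with what the exhibit reports? Section names and
# statuses are invented, not taken from any reviewed exhibit.

sections = {
    "performance goals and measures": {"doc_exists": True, "doc_agrees": False},
    "analysis of alternatives": {"doc_exists": True, "doc_agrees": True},
    "security plan": {"doc_exists": False, "doc_agrees": False},
}

def assess(status: dict) -> str:
    """Classify a section the way the report characterizes weaknesses."""
    if not status["doc_exists"]:
        return "no documentation"
    if not status["doc_agrees"]:
        return "documentation does not fully agree with exhibit"
    return "supported"

findings = {name: assess(status) for name, status in sections.items()}
for name, finding in findings.items():
    print(f"{name}: {finding}")
```

An actual review, of course, rests on auditor judgment against criteria such as NIST guidance and OMB Circular A-94, not a mechanical lookup; the sketch only conveys the structure of the finding categories.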
IRS engages in hundreds of data-sharing arrangements with state revenue, human services, and law enforcement agencies for tax compliance and other purposes. In a small portion of IRS’s federal-state data-sharing arrangements, states require federal tax compliance to qualify for a state business license. In some instances, state licensing agencies require compliance with both federal and state tax obligations, and requirements can vary among states. These arrangements can vary by industry; by type of taxes required for compliance, such as employment taxes or income taxes; and even by the type of documentation required to prove compliance. For example, in some states the businesses may self-certify that they are in compliance with taxes, and in others businesses must provide documentation from IRS or the state revenue agency that they are in compliance with tax requirements. IRS and California’s DLSE are engaged in an arrangement that requires compliance with federal employment taxes to operate a business in any one of three industries in California. An individual applying for a new business license or a renewal of his/her business license to operate a farm labor contracting, garment manufacturing, or car washing and polishing business must first prove full compliance with federal employment taxes by filing all required federal employment tax returns and resolving all outstanding federal employment taxes through full payment or appeal. Each business license applicant in the three industries requiring federal tax compliance must submit a state business license application and a signed IRS Form 8821, Tax Information Authorization, allowing IRS to disclose the applicant’s tax information to DLSE. IRS tax examiners in Ogden, Utah, review the tax information in IRS’s Integrated Data Retrieval System (IDRS) to check the employment tax status of the applicant. 
If the applicant is compliant, IRS provides DLSE and the applicant with a statement that the applicant has met all filing and payment requirements. If the applicant has an outstanding employment tax liability, has not filed a federal employment tax return, or both, the tax examiner prompts the system to generate a noncompliance letter, which is sent to the applicant. Applicants with outstanding tax liability can pay the amounts due or contact the IRS tax examiners for more information or to make arrangements for payment. IRS officials told us that IRS Ogden does not collect employment taxes from business applicants directly. Noncompliant business applicants may pay federal employment taxes they owe at local IRS offices or mail their payments to IRS. Ogden officials are notified by e-mail or phone when business applicants have paid the employment taxes identified in the noncompliance letter. IRS informs DLSE that an applicant has paid all employment taxes, is currently working with a revenue officer to pay all balances due, or is otherwise compliant. After notification, DLSE will issue the applicant’s business license. DLSE officials told us that for purposes of business licensing, California business applicants in the three industries have resolved their taxes if they have (1) paid their tax liability, (2) entered into installment agreements, or (3) completed offers in compromise with IRS. If an applicant’s tax case is in bankruptcy, California’s DLSE makes the decision on whether to issue the business license. If an applicant does not resolve his/her tax liability within 90 days of applying, IRS staff in Ogden send the cases to the IRS Agricultural Team in Fresno, where tax examiners open the case and do investigative work on collecting the balance due. See figure 1 on how data sharing between IRS and California operates. Section 6103 of the Internal Revenue Code (I.R.C.) 
prohibits the disclosure of tax returns and return information by IRS employees; other federal employees, state employees, or both; and certain others having access to the information except in specifically enumerated circumstances. Data sharing between IRS and California DLSE is authorized by a subsection of I.R.C. § 6103. Specifically, section 6103(c) authorizes IRS to disclose the return information of a taxpayer to any other person at the taxpayer’s request. State licensing entities that wish to review federal tax information or have IRS attest to tax compliance before issuing the license would need to require the applicant to provide a written request to IRS authorizing release of the information to the licensing entity. The data-sharing arrangement between IRS Ogden and California DLSE can be a valuable tool for improving compliance among certain businesses. According to IRS officials, this type of data-sharing arrangement has mutual benefits for IRS, by increasing filing and payment compliance with federal employment taxes, and for states, by minimizing concerns about the success of the business and its compliance with unemployment requirements. IRS officials noted that growth in this data-sharing arrangement can generate many compliance benefits with a relatively minimal resource allocation. According to a California DLSE official, this data-sharing arrangement is beneficial because it helps to ensure that businesses are competent and responsible and pay their taxes. The amount of revenue in federal employment taxes collected through this data-sharing arrangement appears to outweigh the cost of operating the data-sharing arrangement. Thousands of California businesses apply for a business license each year in order to operate a business in the three industries previously mentioned, and must provide documentation to DLSE to show that they are in compliance with federal employment taxes. 
Many of these businesses were not in compliance with employment taxes during the time of our analysis and, therefore, had to file tax returns or pay employment taxes to rectify their compliance status. According to the IRS Ogden database on business applicants, 7,194 businesses applied for a business license in the three industries one or more times from calendar years 2006 through 2008 and requested that IRS provide California with information on their compliance with federal employment taxes. About 24 percent of businesses (i.e., 1,726 of the 7,194 that applied) had to file employment tax returns or pay or otherwise resolve overdue taxes to come into compliance with federal employment taxes. IRS staff in Ogden use spreadsheets to track the number of federal tax returns filed by noncompliant California business license applicants and the amounts IRS collected from these businesses that are attributable to the data-sharing arrangement. The spreadsheets show that businesses not in compliance with federal employment taxes when they applied for California business licenses filed hundreds of tax returns in calendar years 2006 and 2007, and IRS collected millions in federal employment taxes. California businesses filed 441 employment tax returns to come into compliance to qualify for California business licenses, and IRS collected nearly $7.4 million in employment taxes, according to IRS Ogden spreadsheets. IRS Ogden officials told us that the nearly $7.4 million in employment taxes collected represents the amount business applicants paid after receiving noncompliance letters related to their DLSE business license applications. Table 1 shows the number of tax returns filed and the amount IRS collected from these applicants during calendar years 2006 and 2007. Even though IRS did not track all of the costs it incurred for operating the data-sharing arrangement, IRS officials noted that the arrangement resulted in high revenues relative to costs. 
In order to get some perspective on how this data-sharing arrangement compares with other IRS enforcement efforts, we developed an estimate of the costs of the arrangement using cost categories provided and confirmed by IRS officials. The cost categories we considered included personnel costs and nonpersonnel costs, such as computers, telephones, and fax machines. We estimated that IRS incurred about $331,348 to operate the data-sharing arrangement in calendar years 2006 and 2007. Our cost estimate included personnel costs of about $202,125 in pay and about $61,042 in benefits for one General Schedule (GS)-5 clerk and two GS-7 tax examiners. We also included about $10,197 for three computers, $1,237 for one dedicated printer, $313 for one fax machine with a dedicated line and two dedicated phone lines with voice mail boxes, and approximately $56,435 for supplies, facilities, utilities, and supervision. These costs may be somewhat overstated because, for instance, we used approximate purchase costs for equipment and did not spread those costs over the useful life of the equipment or other uses of the equipment. Using our estimate, the ROI for this data-sharing arrangement is 22:1. IRS has not tracked the cost data needed to compare the ROI of the IRS Ogden/DLSE data-sharing arrangement with the ROIs of its other current enforcement activities. However, IRS has developed ROI estimates for five new direct revenue-producing enforcement initiatives it proposed in its fiscal year 2009 budget submission. IRS estimates that the average ROI for these activities at full performance (at the end of their second year of implementation) will be 7.1:1. IRS projects the highest ROI for one of the five new initiatives (expanded document matching) at 11.4:1. 
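The cost roll-up and the 22:1 figure follow from simple arithmetic over the component amounts reported above; the sketch below reproduces that calculation using only the dollar figures in this section (the component figures are rounded, so the computed total may differ from the stated total by a dollar or so).

```python
# Reproduction of the cost estimate and ROI arithmetic for the
# IRS Ogden/DLSE data-sharing arrangement, calendar years 2006-2007.
# All dollar figures are taken from this section of the report.

personnel_costs = {
    "pay (one GS-5 clerk, two GS-7 tax examiners)": 202_125,
    "benefits": 61_042,
}
nonpersonnel_costs = {
    "three computers": 10_197,
    "dedicated printer": 1_237,
    "fax machine and phone lines": 313,
    "supplies, facilities, utilities, and supervision": 56_435,
}

# Component figures are rounded, so the total comes to about $331,348.
total_cost = sum(personnel_costs.values()) + sum(nonpersonnel_costs.values())

collections = 7_400_000  # "nearly $7.4 million" in employment taxes collected

roi = collections / total_cost
print(f"estimated cost: ${total_cost:,}")
print(f"ROI: {round(roi)}:1")
```

As the report notes, the cost side may be somewhat overstated (equipment was charged at full purchase price), which would make the true ROI, if anything, slightly higher.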
IRS officials told us that IRS calculates the ROI each year for the revenue-producing initiatives included in the President’s budget request. They also said that these ROI calculations are based on historical information in the Enforcement Revenue Information System and the annually updated unit cost rates used in budget formulation. We did not verify the accuracy of the data IRS provided or its estimate of the revenues and costs of its five new enforcement activities, including the estimated ROI of these activities. We identified the 2,017 businesses that applied for business licenses in calendar year 2006 only, and found that 315 of these businesses had unpaid assessments at the time of applying and that tax compliance improved for these 315 businesses. We identified the businesses that applied for a California business license in 2006 only so that we could follow the tax compliance of this set of specific businesses over time. We matched data of California business license applicants for calendar year 2006 from the IRS Ogden Access database with IRS’s Unpaid Assessments file at two points in time—for the week of September 18, 2006, and the week of August 18, 2008. Our analysis of California business license applicants matched against the IRS Unpaid Assessments file database showed that 315 businesses owed employment taxes as of September 18, 2006, and by August 18, 2008, 165 of those businesses had resolved or lowered their unpaid assessment debt. The 165 businesses resolved or lowered their 2006 unpaid assessments in either calendar year 2007 or 2008. Our analysis also revealed that 150 businesses had not resolved or reduced unpaid assessment debt by August 18, 2008. Table 2 shows business license applicants with unpaid assessments as of September 18, 2006, and businesses that resolved/did not resolve their debt by August 18, 2008. 
The 165 businesses resolved or lowered their unpaid assessments by nearly $2 million—$1,925,162—from the weeks of September 18, 2006, through August 18, 2008. The 115 business license applicants that completely resolved their debt before August 18, 2008, resolved nearly $800,000 in unpaid tax assessments in calendar years 2007 and 2008. These applicants, in total, had a nearly $800,000 tax liability as of September 18, 2006, but the unpaid assessments file showed no tax liability for them as of the week of August 18, 2008. Fifty additional businesses lowered their tax assessments from the weeks of September 18, 2006, through August 18, 2008, by $1,135,216. However, the 165 businesses may have resolved more than $1,925,162 because these taxpayers may have had additional taxes assessed after the week of September 18, 2006, and may have resolved them before the week of August 18, 2008. Our analysis compares unpaid assessments at two points in time since the data file we used did not allow us to track weekly changes in the businesses’ unpaid assessments. Table 3 shows the amount of unpaid tax assessments as of the week of September 18, 2006, and as of the week of August 18, 2008, and the amount of unpaid employment tax assessments resolved by the 165 businesses. All but 1 of the 350 businesses that had unpaid assessments when they applied for business licenses in calendar year 2006 were small businesses. The remaining business was a medium or large business. According to IRS, “small businesses” includes businesses with assets of less than $10 million. The IRS Ogden/California DLSE data-sharing arrangement can be a valuable compliance tool because the requirement to renew business licenses annually provides a motivation to resolve tax debts timely. The arrangement may help flag unpaid tax assessments when they are recent and have a greater likelihood of collection. 
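The two-point-in-time comparison underlying this analysis can be expressed as a simple classification over matched records. The sketch below is a hypothetical illustration: the taxpayer identifiers and balances are invented, and IRS's actual Unpaid Assessments file has a different layout; it shows only the structure of matching each applicant's balance at the two snapshot dates and categorizing the change.

```python
# Hypothetical sketch of the two-snapshot comparison of unpaid
# assessments. Taxpayer IDs and balances are invented for
# illustration; IRS's Unpaid Assessments file differs.

# balance owed per business at each snapshot week
snapshot_sep_2006 = {"biz-a": 5_000.00, "biz-b": 12_500.00, "biz-c": 800.00}
snapshot_aug_2008 = {"biz-a": 0.00, "biz-b": 9_000.00, "biz-c": 800.00}

def classify(before: float, after: float) -> str:
    """Categorize a business the way the report's analysis does."""
    if after == 0:
        return "resolved"
    if after < before:
        return "lowered"
    return "not resolved or reduced"

results = {
    biz: classify(owed, snapshot_aug_2008.get(biz, 0.0))
    for biz, owed in snapshot_sep_2006.items()
}
amount_resolved = sum(
    before - snapshot_aug_2008.get(biz, 0.0)
    for biz, before in snapshot_sep_2006.items()
)
print(results)
print(f"unpaid assessments resolved or lowered: ${amount_resolved:,.2f}")
```

As the report cautions, a two-snapshot comparison cannot see assessments that were both added and resolved between the snapshot dates, so the resolved amount it yields is a lower bound.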
Our previous work found that the age of the unpaid assessment is an indicator of the extent to which the outstanding amounts owed are likely to be collected. This work showed that the older an unpaid assessment, the lower the probability it will be paid. In another report, we found that the IRS records we examined showed that 70 percent of all unpaid payroll taxes—estimated at $58 billion as of September 30, 2007—were owed by businesses with more than a year (4 tax quarters) of unpaid federal payroll taxes. Over a quarter of unpaid federal payroll taxes were owed by businesses that accumulated tax debt for more than 3 years (12 tax quarters). One reason why older debts may not be collected is that they lead to large and increasing amounts of accrued interest and penalties. The requirement that annual business license renewals depend on resolving unpaid employment tax assessments may help businesses avoid the pyramiding of interest and penalties. The ROI for these enforcement activities can vary depending on factors such as the efficiency of operating the data-sharing arrangements and whether the data-sharing arrangements experience higher collections in the early years of operation. For example, an IRS official suggested that there may be a way to more efficiently operate this type of data-sharing arrangement and thereby obtain a higher ROI. By applying an automated filter to isolate business applicants with no taxes due, IRS staff would need to manually review only data on businesses with balances due or that have not filed required returns, and fewer IRS staff may be needed to operate the data-sharing arrangement. An IRS official in Ogden told us that the taxes collected by the IRS Ogden/California DLSE data-sharing arrangement about 10 years ago were substantially higher because there was very little compliance when the data-sharing arrangement first started. 
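The pyramiding effect described above can be illustrated with a simple accrual sketch. The 0.5-percent-per-month failure-to-pay penalty, capped at 25 percent of the tax, reflects the general federal rule; the 8 percent annual interest rate is an assumed figure for illustration only, since the actual underpayment interest rate varies by quarter:

```python
# Illustration of how interest and penalties "pyramid" as a tax debt ages.
# Assumptions: 0.5%-per-month failure-to-pay penalty capped at 25% of the tax
# (the general federal rule), and an illustrative 8% annual interest rate.
def accrued_balance(tax_due, months, annual_interest=0.08):
    penalty = min(0.005 * months, 0.25) * tax_due
    interest = tax_due * ((1 + annual_interest / 12) ** months - 1)
    return tax_due + penalty + interest

# A $10,000 debt after 4 quarters versus 12 quarters of delinquency
for quarters in (4, 12):
    print(quarters, round(accrued_balance(10_000, quarters * 3), 2))
```

Under these assumed rates, three years of delinquency adds roughly three times as much in penalties and interest as one year does, which is why recent debts are more likely to be collected in full.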
This official recalled that when the program was transferred to Ogden, the numbers of noncompliant applicants were at least three or four times higher than they are now. In this official's view, consistent enforcement of the business tax compliance requirement for the three industries has steadily improved compliance and has resulted in fewer business applicants that are noncompliant with their employment tax obligations when they apply for business licenses. Given that our analysis indicates that the California business licensing requirement likely has a higher ROI than the direct revenue-producing enforcement initiatives IRS proposed in its 2009 Congressional budget submission, a fuller examination is warranted. A more complete evaluation could address some potential factors that could reduce or increase the ROI we calculated. For example, it could evaluate whether IRS had taken other enforcement actions against the California businesses at the same time as they were applying for licenses. If IRS had sent collection notices to the businesses or taken other enforcement action at or close to the time the businesses went through the licensing reviews, the resolution of their debts might be attributable to those enforcement actions. A more complete evaluation could also compare the results of this enforcement approach to the results for similar businesses that were not subject to the business licensing requirement. Such an analysis could help demonstrate how well the business licensing requirement fares compared to the "normal" enforcement actions that would be taken by IRS with similarly situated businesses. A more complete evaluation could take into account the resolution of debts that may have been incurred before a business applied for a license. Since the affected businesses know they must resolve their employment tax debts in order to receive business licenses, some may pay or otherwise resolve their debts in anticipation of the licensing review by IRS. 
Any such advance payments or resolutions could be included in the evaluation. Similarly, although our tracing of businesses' compliance from calendar years 2006 to 2008 shows improvement in the resolution of many firms' debt, a more complete evaluation could compare their improvement to similarly situated businesses that were not subject to the licensing requirement. Such a comparison would help show whether this continued improvement in the delinquent debts was better than what could have occurred absent the licensing requirement. During the period we reviewed, Ogden staff responsible for the business licensing reviews discarded older operational data when they were no longer needed for their purposes. Further, a few months of data were lost even before they would have normally been discarded. Although these data may not be needed to administer the program, they are needed to support a more complete review of the program's ROI. We contacted revenue officials in every state and the District of Columbia to ask whether their states have business licensing requirements and, if so, whether they require demonstration of state tax compliance before business licenses are granted. Of the 47 states and the District of Columbia that responded, 20 revenue officials told us that their states require demonstration of compliance with one or more state taxes for businesses to qualify for state business licenses, and these requirements exist for one or more industries. Based on these responses, the tax compliance requirement is typically limited to a few industries, requires compliance with selected taxes, and varies in the amount of documentation required to show compliance with tax requirements. Table 4 summarizes responses on the number of states that require businesses to be tax compliant to qualify for state business licenses in one or more industries as of April 6, 2009. 
Of the 19 states and the District of Columbia that require business applicants to be compliant with state taxes, only the District of Columbia requires applicants in all industries to be state tax compliant to qualify for business licenses. Nineteen states require business license applicants in one or more industries to be state tax compliant to qualify for business licenses. Most of the states that responded do not require compliance with all three types of taxes (employment, income, and sales and use); the District of Columbia is the exception, requiring compliance with all three. Of the 19 states that identified compliance with state taxes to qualify for a state business license, 15 identified the specific type of tax or taxes being reviewed. Seven states require compliance with their employment taxes, 8 states require compliance with state sales and use taxes, and 10 states require compliance with state income taxes. State requirements also vary in the amount and kind of documentation required to prove compliance with tax requirements. For example, Rhode Island allows businesses to self-certify that they are in compliance with state tax requirements. Pennsylvania requires licensing agencies to request verification of state tax compliance from the state tax agency when a business owner applies for or renews a license. IRS maintains information on data-sharing arrangements that include requirements for federal tax compliance to qualify businesses for state business licenses. According to an IRS document, 13 data-sharing arrangements exist that require compliance with federal taxes to qualify for state business licenses. For example, the State of Oregon requires farm/forest labor contractors to comply with federal and state taxes to qualify for state business licenses. Each farm/forest labor contractor applicant must submit IRS Form 8821 with the application. In addition, applicants can be denied licenses if state and federal taxes are owed. 
Similarly, the State of Connecticut requires applicants for gaming licenses to be compliant with federal and state income taxes to qualify for business licenses. Applicants for gaming licenses must submit complete copies of their most recent federal and state income tax returns and certify that there are no outstanding tax delinquencies or unresolved disputes. While most states told us that they do not track tax collections and program costs associated with these data-sharing arrangements, those revenue officials who provided comments and participate in data-sharing arrangements requiring state tax compliance told us that the state arrangements improve state tax collections and promote voluntary compliance. For example, a revenue official said that data sharing is used as another tool to collect outstanding taxes due the state. This official also said that the data-sharing arrangement has been very effective in furthering the state's tax collection efforts. He added that without a license, a business cannot operate. Another state revenue official told us that the individual or business must keep all state tax obligations current in order to prevent the denial or revocation of the applicable license. In this official's view, this causes the affected individuals or businesses to be less likely to have delinquent returns and outstanding tax bills that are not on payment agreements. Our analysis shows that 19 states and the District of Columbia allow the taxpayer to obtain a business license if the taxpayer sets up a payment agreement with the state's revenue agency. IRS staff in Ogden told us that requiring tax compliance makes businesses think about the consequences of not being tax compliant. They added that the data-sharing arrangement itself becomes a deterrent after a while, since businesses whose compliance IRS checks learn that they cannot get licenses without being compliant. 
While some state revenue officials see benefit in requiring tax compliance to qualify for a business license, they recognize certain challenges their agencies face from linking state business licensing with tax compliance. One of the challenges is coordination between state agencies. For example, a revenue official said that obtaining key information from state agencies, such as tax identification numbers and licensee names, and acting on her agency’s request to suspend the licenses are ongoing challenges. A 2008 state study noted that agency coordination is crucial to the success of any tax clearance program. In order for the program to be effective, each agency must be prepared to share information with other agencies and to act on information received from other agencies. Finally, states also face technical issues with linking tax compliance with business licensing. For example, a revenue official said that her state does not have electronic linking between its Division of Alcoholic Beverages and Tobacco and the Department of Revenue for verification of applicant sales and use tax information. This official also said that the challenge would be to link their present licensing computer system with the Department of Revenue system. Another official said that her agency’s computer programs vary, are outdated, and are not integrated. Some challenges identified by states likely would be especially important for any expansion of requirements for federal tax compliance to obtain state business licenses. For example, some of the revenue officials we contacted identified legal issues relating to data sharing. A state revenue official said that various licensing statutes do not permit the revocation or threat of action against a licensee due to tax noncompliance. An additional revenue official said that with limited resources, implementing the legal requirement to review licenses is difficult. 
The potential to increase data-sharing opportunities between IRS and state business licensing entities exists, but pinpointing the exact number of opportunities is difficult. According to the Small Business Administration, business licensing requirements vary from state to state. For example, a state "business license" is the main document required for tax purposes and for conducting other basic business functions. However, some states have separate licensing requirements based on the product sold, such as licenses to sell liquor, lottery tickets, gasoline, or firearms. Ultimately, it is up to each state to determine what industries, occupations, and professions must be licensed and the licensure requirements that applicants must meet. Some states and some business types may represent more of an immediate opportunity for establishing arrangements that require federal tax compliance to qualify for state business licenses. States that currently require compliance with state taxes for selected business license applicants may be more amenable to requiring federal tax compliance than states that do not even require state tax compliance, since they already recognize tax compliance as important for the businesses. For example, North Carolina, Texas, and Missouri have a requirement for tax compliance with state taxes for retail sales businesses. These states do not require compliance with federal taxes. In addition, states that currently require compliance with federal employment taxes may be amenable to extending the requirement to include federal income taxes. For example, California's DLSE requires applicants for the three industries requiring licensing to be compliant with federal employment taxes only. California's garment manufacturing, farm labor contracting, and car washing and polishing license applicants have no requirement to be in compliance with federal income taxes to qualify for business licenses. 
Increasing data sharing between IRS and state governments to help reduce the tax gap can be beneficial to IRS when such data-sharing arrangements demonstrate firm compliance value. Data-sharing arrangements requiring tax compliance among business license applicants show real potential to be a valuable tool to improve tax compliance among certain businesses. Our estimated ROI of the data-sharing arrangement between IRS Ogden and California DLSE suggests that requiring tax compliance to qualify for state business licenses can be a cost-effective way of collecting tax debt. In fact, the data-sharing arrangement's estimated ROI is higher than the estimated ROI for the new direct revenue-producing tax enforcement initiatives in IRS's fiscal year 2009 budget submission. However, a more complete evaluation could take into account all the factors that could affect ROI. To be in a better position to evaluate these data-sharing arrangements, IRS needs to ensure that program data are retained. We recommend that the Commissioner of Internal Revenue take the following three actions: (1) collect and retain the cost and revenue data needed to develop ROI estimates for programs requiring businesses to demonstrate federal tax compliance to obtain state business licenses; (2) evaluate the ROI of existing arrangements where states require federal tax compliance to qualify for state business licenses to determine whether the ROI of these programs is sufficient to merit their expansion; and (3) to the extent that existing data-sharing arrangements have a sufficiently high ROI, coordinate with states to expand requirements to comply with federal taxes to qualify for state business licenses and monitor the ROI of these expansions to gauge their success. On behalf of the Commissioner of Internal Revenue, the Deputy Commissioner for Services and Enforcement provided written comments on a draft of this report in a June 8, 2009, letter. The Deputy Commissioner agreed with our recommendations. 
IRS plans to gather appropriate data to develop ROI estimates for this program, evaluate the results to determine whether these programs merit expansion, and if so, work with states to expand the programs. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Our objectives were to analyze (1) the extent to which requiring a demonstration of federal tax compliance to qualify for a state business license has the potential to improve federal tax compliance and (2) what opportunities exist for increasing arrangements that require federal tax compliance to qualify for state business licensing. This report focuses on data-sharing arrangements that require compliance with federal or state tax obligations to qualify for state business licensing. We did not include licensing requirements at the local level or licensing for professions or occupations. To provide background on data-sharing arrangements that require compliance with tax obligations to qualify for state business licensing, we reviewed relevant Internal Revenue Service (IRS) and California Department of Industrial Relations, Division of Labor Standards Enforcement (DLSE) documents and interviewed IRS and California officials. We also reviewed laws and regulations related to taxpayer disclosure. 
To determine the extent to which requiring a demonstration of federal tax compliance to qualify for a state business license has the potential to improve federal tax compliance, we used the IRS and State of California data-sharing arrangement as a case study. To determine the potential for improving federal tax compliance, we estimated the return on investment (ROI) for the IRS Ogden/California DLSE data-sharing arrangement. To determine the amount collected, we used IRS Ogden/California DLSE spreadsheets that record the number of federal tax returns filed by applicants for business licenses in the three industries and the amount IRS collected from businesses that were notified by IRS that they were not in compliance with federal employment taxes covering calendar year 2006 and 8 months in calendar year 2007. Spreadsheet data for calendar year 2007 excluded 4 months. Ogden misplaced data for July and August, and data for November and December were not available when we obtained the data in November 2007. We also reviewed agency agreements and memoranda, regulations, and reports covering the data-sharing arrangement between IRS and California DLSE and interviewed IRS and California officials about the value of the data-sharing arrangement to IRS and the state. To determine the cost of operating this data-sharing arrangement, we estimated the costs of collecting the amounts owed by noncompliant businesses using actual cost categories provided by IRS Ogden officials. We used the guidance for preparing agency budgets in Executive Office of the President, Office of Management and Budget, Circular A-11, Preparation, Submission, and Execution of the Budget, and our Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs, GAO-09-3SP (Washington, D.C.: March 2009). 
We estimated personnel costs using the Office of Personnel Management Salary Table 2009-RUS, for the locality pay area of "rest of U.S.," effective January 2009, for General Schedule (GS) 5 and 7 personnel at step 5. We used step 5 to capture the midpoint of the GS 5 and 7 grade levels so as not to bias pay in the direction of a low or high estimate. We estimated the cost of benefits for these employees using the Department of Labor's Bureau of Labor Statistics 30.2 percent average compensation cost for calendar year 2008. We estimated the cost of fax machines and printers by averaging the costs for these items as shown on the Web sites for federal government customers of two leading manufacturers of these products. We shared our estimates with IRS officials to obtain concurrence with our estimates of nonpersonnel costs. We did not analyze the taxes IRS may collect or the costs it may incur after the noncompliant cases leave Ogden. We then compared the ROI ratio for this data-sharing arrangement to IRS's estimates for five revenue-producing enforcement initiatives in the IRS fiscal year 2009 budget submission. Ninety days after Ogden first receives their application materials, the Ogden Service Center sends information on the taxpayers that have unpaid assessments to the Fresno Service Center for collection. To determine whether California businesses remained in compliance over time, we matched data on California business applicants from an Access database maintained by IRS Ogden with taxpayers in IRS's Business Master File Unpaid Assessments file. We selected the businesses that, according to the Access database, applied for California DLSE business licenses in calendar year 2006 only—that is, applied in calendar year 2006 and did not reapply in calendar year 2007 or in the 10 months in 2008 for which we have IRS Ogden DLSE Access database data. 
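The cost-and-ROI arithmetic described above can be sketched briefly. The collection and cost totals below are the figures reported in this review (nearly $7.4 million collected against an estimated $331,348 in operating costs); the $31,000 salary in the example call is a hypothetical placeholder, not an actual 2009-RUS pay-table value:

```python
# Sketch of the cost-and-ROI estimation approach described above.
BENEFITS_RATE = 0.302  # BLS average compensation cost rate for CY 2008

def personnel_cost(base_salary, fte=1.0):
    """Annual salary plus benefits for a given fraction of a full-time employee."""
    return base_salary * (1 + BENEFITS_RATE) * fte

print(round(personnel_cost(31_000)))  # hypothetical salary plus 30.2% benefits -> 40362

collections = 7_400_000  # employment taxes collected, CY 2006 plus 8 months of CY 2007
cost = 331_348           # estimated cost of operating the arrangement for that period
print(round(collections / cost))  # -> 22, the roughly 22:1 ROI reported in this review
```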
The Access database contained January 2006 through October 2007 data on business license applicants in the three covered industries. For our analysis, we (1) matched records of the California businesses that we selected from the Access database because they applied in calendar year 2006 only with IRS's Unpaid Assessments file as of the weeks of September 18, 2006, and August 18, 2008; (2) identified the number of businesses with unpaid assessments and the amounts of their tax debt as of the week of September 18, 2006; and (3) identified the applicants for business licenses in 2006 only that had resolved their unpaid assessments as of August 18, 2008, and the amounts of the tax debt they resolved. Our analysis compares unpaid assessments at two points in time since the data file we used did not allow us to track weekly changes in the businesses' unpaid assessments. We could not follow the tax compliance of earlier applicants into the present because IRS Ogden purged data from the Access database for calendar years earlier than 2006. Unpaid assessments in this report include the total tax assessment plus interest and penalties where these exist. 
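The two-snapshot matching steps above can be sketched with a minimal example. The identifiers and dollar amounts below are invented illustrations; the real analysis matched records from the IRS Ogden Access database against the Business Master File Unpaid Assessments file:

```python
# Minimal sketch of the two-snapshot match described above (illustrative data only).
applicants_2006_only = [1, 2, 3, 4]      # applied in CY 2006 and never reapplied
unpaid_2006 = {1: 500.0, 3: 1200.0}      # snapshot: week of September 18, 2006
unpaid_2008 = {3: 400.0}                 # snapshot: week of August 18, 2008

# Step 2: applicants with unpaid assessments at the first snapshot
had_debt = [ein for ein in applicants_2006_only if unpaid_2006.get(ein, 0.0) > 0]

# Step 3: those that resolved or lowered that debt by the second snapshot
resolved_or_lowered = [ein for ein in had_debt
                       if unpaid_2008.get(ein, 0.0) < unpaid_2006[ein]]

print(len(had_debt), len(resolved_or_lowered))  # -> 2 2
```

Because only two snapshots are compared, a business that paid down its debt and then accrued new assessments between the two dates would look unchanged, which is the limitation noted above.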
To determine what opportunities exist for increasing data sharing for arrangements that require federal tax compliance to qualify for state business licensing, we (1) analyzed and summarized which states and the District of Columbia have data-sharing arrangements that require state tax compliance to qualify for state business licensing, which states do not have such arrangements, and which states do not require businesses to obtain business licenses on the state level; (2) contacted revenue officials in 50 states and the District of Columbia via e-mail with structured questions about the extent to which their states engage in data-sharing arrangements that require demonstration of tax compliance before business licenses are granted; and (3) sent a follow-up e-mail to 21 state revenue officials who confirmed that their states require applicants to be compliant with state taxes to qualify for business licenses, requesting information on the amount of taxes collected, the costs associated with operating the data-sharing arrangements, and the benefits of these data-sharing relationships to the states. Three states did not respond to our structured questions about the extent to which their states engage in data-sharing arrangements that require demonstration of tax compliance before businesses qualify for business licenses. We also (1) summarized IRS information on existing data-sharing arrangements between IRS and state agencies that require compliance with federal taxes to qualify for state business licensing, (2) interviewed IRS officials to determine which states have state licensing requiring federal tax compliance, and (3) reviewed IRS documentation on data-sharing arrangements between IRS and state agencies that require businesses to demonstrate federal tax compliance to qualify for state business licensing. Our review was subject to some limitations. 
We did not verify the accuracy of the data IRS provided or its estimate of the revenues and costs of its five new enforcement activities, including the estimated ROI of these activities. IRS Ogden's Access database on California business applicants did not contain data prior to calendar year 2006 because IRS Ogden purged these historical data. While data from previous years would be useful for evaluating the data-sharing arrangement, we believe that the Ogden records that were available were sufficient to attain an understanding of the potential value of this arrangement as a compliance tool. The IRS Ogden spreadsheet used to track the number of federal tax returns filed by noncompliant California business license applicants and the amount IRS collected from these businesses attributable to the data-sharing arrangement did not contain data for the months of July, August, November, and December 2007. We acknowledge that data for the entire calendar year of 2007 would have increased the reported number of tax returns filed and the amount IRS collected from applicants. Our estimate of the cost of IRS and California's data-sharing arrangement may be somewhat overstated because, for instance, we used approximate purchase costs for equipment, did not spread those costs over the useful life of the equipment or its other uses, and used calendar year 2009 costs. Additionally, our analysis did not address some other potential factors that could reduce or increase the ROI we calculated. We did not verify the responses from the states about tax compliance to qualify for state business licenses. We recognize that the state revenue officials may not be knowledgeable about all of their states' requirements for tax compliance to qualify for business licenses, but they are a credible source of information about state tax compliance to qualify for state business licenses. 
We conducted this performance audit from June 2007 through June 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Signora J. May, Assistant Director; Amy R. Bowser; Jennifer K. Echard; Amy C. Friedlander; Arthur L. James, Jr.; Stuart M. Kaufman; Edward J. Nannenhorn; Lou V. B. Smith; Jessica Thomsen; and James J. Ungvarsky made key contributions to this report.
The California Department of Industrial Relations, Division of Labor Standards Enforcement (DLSE), requires applicants for California business licenses in three industries—farm labor contracting, garment manufacturing, and car washing and polishing—to be in compliance with federal employment tax obligations to qualify. Based on questions about whether the Internal Revenue Service (IRS) is fully using data from state and local governments to reduce the tax gap, GAO was asked to analyze (1) the extent to which requiring a demonstration of federal tax compliance to qualify for a state business license has the potential to improve federal tax compliance and (2) what opportunities exist for increasing arrangements that require federal tax compliance to qualify for state business licensing. To address these objectives, GAO analyzed IRS administrative and tax data. GAO identified California as a case study. GAO interviewed IRS and state officials and contacted revenue officials in the 50 states and the District of Columbia. The California requirement that three types of businesses be in compliance with federal employment taxes to obtain a state business license shows promise as a valuable tool for improving federal tax compliance. According to data from IRS, of 7,194 businesses that applied for a California business license one or more times from calendar years 2006 through 2008, about 24 percent had to file employment tax returns or pay overdue taxes to come into compliance with federal employment taxes. California businesses filed 441 employment tax returns and IRS collected nearly $7.4 million in current dollars in employment taxes in calendar year 2006 and in 8 months of calendar year 2007. GAO estimated that IRS incurred about $331,348 to operate the data-sharing arrangement for this period. Using this cost estimate, the ROI for this arrangement is 22:1. 
IRS has not tracked the cost data needed to compare the ROI of the IRS-DLSE enforcement activity with other current enforcement activities. However, IRS's highest estimated ROI among five new direct revenue-producing enforcement initiatives proposed in its fiscal year 2009 budget was 11.4:1. Tax compliance among businesses after they applied for state business licenses showed continued improvement. GAO identified 2,017 businesses that applied for business licenses in calendar year 2006 only and found that 315 of these businesses had unpaid assessments as of September 18, 2006. By August 18, 2008, 165 of these businesses had resolved or lowered their unpaid assessment debt by $1,925,162. All but 1 of the 315 businesses that had unpaid assessments when they applied for business licenses in calendar year 2006 were small businesses. GAO's analysis, although showing a promising ROI, did not take into account certain factors, such as whether other tax collection activities were in process for the businesses that applied for licenses. Many opportunities exist to require federal tax compliance to qualify for state business licenses. GAO contacted revenue officials in every state and the District of Columbia to ask whether their states require tax compliance for business licenses. Of the 48 respondents, 20 revenue officials said that their states require compliance with state taxes to obtain a state business license and that these requirements exist for one or more industries. Twenty said that their states do not have such a requirement; 8 said that their states have no business license requirement at the state level. According to IRS, arrangements exist with 13 states that require compliance with one or more federal taxes to qualify for a state business license. Varying licensing requirements from state to state and lack of uniformity among states in categorizing a license as a "business license" make pinpointing the exact number of opportunities difficult. 
States that currently require compliance with state taxes for selected business license applicants may represent more of an immediate opportunity for establishing arrangements that require federal tax compliance to qualify for a state business license since they already see tax compliance as important for the businesses. Some challenges, such as a lack of current legal authority in some states to link businesses to tax compliance, would need to be addressed if requiring federal tax compliance for state business licenses is to be expanded.
Recruitment and retention of adequate numbers of qualified workers are major concerns for many health care providers today. While current data on supply and demand for many categories of health workers are limited, available evidence suggests emerging shortages in some fields, for example, among nurses and nurse aides. Many providers are reporting rising vacancy and turnover rates for these worker categories. In addition, difficult working conditions and concerns about wages have contributed to rising levels of dissatisfaction among many nurses and nurse aides. These concerns are likely to increase in the future as demographic pressures associated with an aging population are expected to both increase demand for health services and limit the pool of available workers such as nurses and nurse aides. As the baby boom generation ages, the population of persons age 65 and older is expected to double between 2000 and 2030, while the number of women age 25 to 54, who have traditionally formed the core of the nursing workforce, will remain virtually unchanged. As a result, the nation may face a caregiver shortage of different dimensions from those of the past. Nurses and nurse aides are by far the two largest categories of health care workers, followed by physicians and pharmacists. While current workforce data are not adequate to determine the magnitude of any imbalance between supply and demand with any degree of precision, evidence suggests emerging shortages of nurses and nurse aides to fill vacant positions in hospitals, nursing homes, and other health care settings. Hospitals and other providers throughout the country have reported increasing difficulty in recruiting health care workers, with national vacancy rates in hospitals as high as 21 percent for pharmacists in 2001. 
Rising turnover rates in some fields such as nursing and pharmacy are another challenge facing providers and are suggestive of growing dissatisfaction with wages, working environments, or both. There is no consensus on the optimal number and ratio of health professionals necessary to meet the population’s health care needs. Both demand and supply of health workers are influenced by many factors. For example, with respect to registered nurses (RN), demand not only depends on the care needs of the population, but also on how providers— hospitals, nursing homes, clinics, and others—decide to use nurses in delivering care. Providers have changed staffing patterns in the past, employing fewer or more nurses relative to other workers at various times. National data are not adequate to describe the nature and extent of nurse workforce shortages nor are data sufficiently sensitive or current to allow a comparison of the adequacy of nurse workforce size across states, specialties, or provider types. With respect to pharmacists, there are also limited data available for assessing the adequacy of supply, a situation that has led to contradictory claims of a surplus of pharmacists a few years ago and a shortage at the present time. While several factors point to growing demand for pharmacy services such as the increasing number of prescriptions being filled, a greater number of pharmacy sites, and longer hours of operation, these pressures may be moderated by expanding access to alternative dispensing models such as Internet and mail-order delivery services. Recent studies suggest that hospitals and other health care providers in many areas of the country are experiencing increasing difficulty recruiting health care workers. A recent 2001 national survey by the American Hospital Association reported an 11 percent vacancy rate for RNs, 18 percent for radiology technicians, and 21 percent for pharmacists. 
Half of all hospitals reported more difficulty in recruiting pharmacists than in the previous year, and three-quarters reported greater difficulty in recruiting RNs. Urban hospitals reported slightly more difficulty in recruiting RNs than rural hospitals. However, rural hospitals reported higher vacancy rates for several other types of employees. Rural hospitals reported a 29 percent vacancy rate for pharmacists and 21 percent for radiology technologists compared to 15 percent and 16 percent respectively among urban hospitals. A recent survey in Maryland conducted by the Association of Maryland Hospitals and Health Systems reported a statewide average RN vacancy rate for hospitals of 14.7 percent in 2000, up from 3.3 percent in 1997. The Association reported that the last time vacancy rates were at this level was during the late 1980s, during the last reported nurse shortage. Also in 2000, Maryland hospitals reported a 12.4 percent vacancy rate for pharmacists, a 13.6 percent rate for laboratory technicians, and 21.0 percent for nuclear medicine technologists. These same hospitals reported taking 60 days to fill a vacant RN position in 2000 and 54 days to fill a pharmacy vacancy in 1999. Several recent analyses illustrate concerns over the supply of nurse aides. In a 2000 study of the nurse aide workforce in Pennsylvania, staff shortages were reported by three-fourths of nursing homes and more than half of all home health care agencies. Over half (53 percent) of private nursing homes and 46 percent of certified home health care agencies reported staff vacancy rates higher than 10 percent. Nineteen percent of nursing homes and 25 percent of home health care agencies reported vacancy rates exceeding 20 percent. 
A recent survey of providers in Vermont found high vacancy rates for nurse aides, particularly in hospitals and nursing homes; as of June 2000, the vacancy rate for nurse aides in nursing homes was 16 percent, in hospitals 15 percent, and in home health care 8 percent. In a recent survey of states, officials from 42 of the 48 states responding reported that nurse aide recruitment and retention were currently major workforce issues in their states. More than two-thirds of these states (30 of 42) reported that they were actively engaged in efforts to address these issues. Rising turnover rates in many fields are another challenge facing providers and suggest growing dissatisfaction with wages, working environments, or both. According to a recent national hospital survey, rising rates of turnover have been experienced, particularly in nursing and pharmacy departments. Turnover among nursing staff rose from 11.7 percent in 1998 to 26.2 percent in 2000. Among pharmacy staff, turnover rose from 14.6 percent to 21.3 percent over the same period. Nursing home and home health care industry surveys indicate that nurse turnover is an issue for them as well. In 1997, an American Health Care Association (AHCA) survey of 13 nursing home chains identified a 51-percent turnover rate for RNs and licensed practical nurses (LPN). A 2000 national survey of home health care agencies reported a 21-percent turnover rate for RNs. Many providers also are reporting problems with retention of nurse aide staff. Annual turnover rates among aides working in nursing homes are reported to be from about 40 percent to more than 100 percent. In 1998, a survey sponsored by AHCA of 12 nursing home chains found 94-percent turnover among nurse aides. A more recent national study of home health care agencies identified a 28 percent turnover rate among aides in 2000, up from 19 percent in 1994. High rates of turnover may lead to higher provider costs and quality of care problems. 
Direct provider costs of turnover include recruitment, selection, and training of new staff, overtime, and use of temporary agency staff to fill gaps. Indirect costs associated with turnover include an initial reduction in the efficiency of new staff and a decrease in staff morale and group productivity. In nursing homes, for example, high turnover can disrupt the continuity of patient care—that is, aides may lack experience and knowledge of individual residents or clients. When turnover leads to staff shortages, nursing home residents may suffer harm because fewer staff remain to care for the same number of residents. Job dissatisfaction has been identified as a major factor contributing to the current problems providers report in recruiting and retaining nurses and nurse aides. Among nurses, inadequate staffing, heavy workloads, and the increased use of overtime are frequently cited as key areas of job dissatisfaction. A recent Federation of Nurses and Health Professionals (FNHP) survey found that half of the currently employed RNs surveyed had considered leaving the patient-care field for reasons other than retirement over the past 2 years; of those who considered leaving, 18 percent wanted higher wages, but 56 percent wanted a less stressful and less physically demanding job. Other surveys indicate that while increased wages might encourage nurses to stay at their jobs, money is not generally cited as the primary reason for job dissatisfaction. The FNHP survey found that 55 percent of currently employed RNs were either just somewhat or not satisfied with their facility’s staffing levels, while 43 percent indicated that increased staffing would do the most to improve their jobs. For nurse aides, low wages, few benefits, and difficult working conditions are linked to high turnover.
Our analysis of national wage and employment data from the Bureau of Labor Statistics (BLS) indicates that, on average, nurse aides receive lower wages and have fewer benefits than workers generally. In 1999, the national average hourly wage for aides working in nursing homes was $8.29, compared to $9.22 for service workers and $15.29 for all workers. For aides working in home health care agencies, the average hourly wage was $8.67, and for aides working in hospitals, $8.94. Aides working in nursing homes and home health care are more than twice as likely as other workers to be receiving food stamps and Medicaid benefits, and they are much more likely to lack health insurance. One-fourth of aides in nursing homes and one-third of aides in home health care are uninsured compared to 16 percent of all workers. In addition, other studies have found that the physical demands of nurse aide work and other aspects of the environment contribute to retention problems. Nurse aide jobs are physically demanding, often requiring moving patients in and out of bed, long hours of standing and walking, and dealing with patients or residents who may be disoriented or uncooperative. Concern about emerging shortages may increase as the demand for health care services is expected to grow dramatically with the continued aging of the population. In most job categories, health care employment is expected to grow much faster than overall employment, which BLS projects will increase by 14.4 percent from 1998 to 2008. As shown in Table 1, total employment for personal and home care aides is expected to grow by 58 percent, with 567,000 new workers needed to meet the increased demand and replace those who leave the field. Employment of physical therapists is expected to grow by 34 percent, and employment of RNs is projected to grow by almost 22 percent, with 794,000 new RNs expected to be needed by 2008.
Demographic trends will continue to exert significant pressure on both the supply of and demand for nurses and nurse aides, and a more serious shortage of these workers is expected in the future. Demand is expected to increase dramatically when the baby boomers reach their 60s, 70s, and beyond. Between 2000 and 2030, the population age 65 years and older will double. During that same period the number of women age 25 to 54, who have traditionally formed the core of the nurse and nurse aide workforce, is expected to remain relatively unchanged. Unless more young people choose to go into the nursing profession, the workforce will continue to age. By 2010, approximately 40 percent of nurses will likely be older than 50 years. By 2020, the total number of full-time equivalent RNs is projected to have fallen 20 percent below HRSA’s projections of the number of RNs that will be required to meet demand at that time. In addition to concerns about the overall supply of health care professionals, the distribution of available providers is an ongoing public health concern. Many Americans live in areas—including isolated rural areas or inner city neighborhoods—that lack a sufficient number of health care providers. The National Health Service Corps (NHSC) is one safety-net program that directly places primary care physicians and other health professionals in these medically needy areas. The NHSC offers scholarships and educational loan repayments for health care professionals who, in turn, agree to serve in communities that have a shortage of them. Since its establishment in 1970, the NHSC has placed thousands of physicians, nurse practitioners, dentists, and other health care providers in communities that report chronic shortages of health professionals. At the end of fiscal year 2000, the NHSC had 2,376 providers serving in shortage areas.
Since the NHSC was last reauthorized in 1990, funding for its scholarship and loan repayment programs has increased nearly 8-fold, from about $11 million in 1990 to around $84 million in 2001. Some have proposed expanding the NHSC or developing similar programs to include additional health care disciplines, such as nurses, pharmacists, and medical laboratory personnel. In considering such possibilities, HHS and the Congress may want to consider our work that has identified several ways in which the NHSC could be improved. These include how the NHSC identifies and measures the need for providers, how NHSC placements are coordinated with other programs and with one another, and which financing mechanism—scholarships or loan repayments—is a better approach to attract providers to those areas. Over the past 6 years, we have identified numerous problems with the way HHS decides whether an area is a health professional shortage area (HPSA), a designation required for an NHSC placement. In addition to identifying problems with the timeliness and quality of the data used, we found that HHS’ current approach does not count some providers already working in the shortage area. For example, it does not count nonphysicians providing primary care, such as nurse practitioners, and it does not count NHSC providers already practicing there. As a result, the current HPSA system tends to overstate the need for more providers, leading us to question the system’s ability to assist HHS in identifying the universe of need and in prioritizing areas. Recognizing the flaws in the current system, HHS has been working on ways to improve the designation of HPSAs, but the problems have not yet been resolved. After studying the changes needed to improve the HPSA system for nearly a decade, HHS published a proposed rule in the Federal Register in September 1998.
The proposed rule generated a large volume of comments and a high level of concern about its potential impact. In June 1999, HHS announced that it would conduct further analyses before proceeding. HHS continues to work on a revised shortage area designation methodology; however, as of July 2001, it did not have a firm date for publishing the proposed new regulations. The controversy surrounding proposed modifications to the HPSA designation system may be due, in large part, to its use by other programs. Originally, it was only used to identify an area as one that could request a provider from the NHSC. Today many federal and state programs— including efforts unaffiliated with HHS—use the HPSA designation in considering program eligibility. These areas want to get and retain the HPSA designation in order to be eligible for such other programs as the Rural Health Clinic program or a 10 percent bonus on Medicare payments for physicians and other providers. The NHSC needs to coordinate its placements with other efforts to attract physicians to needy areas. There are not enough providers to fill all of the vacancies approved for NHSC providers. As a result, underserved communities are frequently turning to another method of obtaining physicians—attracting non-U.S. citizens who have just completed their graduate medical education in the United States. These physicians generally enter the United States under an exchange visitor program, and their visas, called J-1 visas, require them to leave the country when their medical training is done. However, the requirement to leave can be waived if a federal agency or state requests it. A waiver is usually accompanied by a requirement that the physician practice for a specified period in an underserved area. In fiscal year 1999, nearly 40 states requested such waivers. 
They are joined by several federal agencies—particularly the Department of Agriculture, which wants physicians to practice in rural areas, and the Appalachian Regional Commission, which wants to fill physician needs in Appalachia. Waiver placements have become so numerous that they have outnumbered the placements of NHSC physicians. In September 1999, over 2,000 physicians had waivers and were practicing in or contracted to practice in underserved areas, compared with 1,356 NHSC physicians. In 1999, the number of waiver physicians was large enough to satisfy over one-fourth of the physicians needed to eliminate HPSA designations nationwide. Our follow-up work in 2001 with the federal agencies requesting the waivers and 10 states indicates that these waivers are still frequently used to attract physicians to underserved areas. Although coordinating NHSC placements and waiver placements has the obvious advantage of addressing the needs of as many underserved locations as possible, this coordination has not occurred. In fact, this sizeable domestic placement effort—using waiver physicians to address medical underservice—is rudderless. Even among those states and agencies using the waiver approach, no federal agency has responsibility for ensuring that placement efforts are coordinated. The Administration has recently stated that HHS will enhance coordination between the NHSC and the use of waiver physicians; however, HHS does not have a system to take waiver physician placements into account in determining where to put NHSC physicians. While some informal coordination may occur, it remains a fragmented effort with no overall program accountability. As a result, some areas have ended up with more than enough physicians to remove their shortage designations, while needs in other areas have gone unfilled. As the Congress considers reauthorizing the NHSC, it also has the opportunity to address these issues.
We believe that the prospects for coordination would be enhanced through congressional direction in two areas. The first is whether waivers should be included as part of an overall federal strategy for addressing underservice. This should include determining the size of the waiver program and establishing how it should be coordinated with other federal programs. The second—applicable if the Congress decides that waivers should be a part of the federal strategy—is designating leadership responsibility for managing the use of waivers as a distinct program. While congressional action could foster a coordinated federal strategy for placement of J-1 waiver physicians, our work has also shown that congressional action could help ensure that NHSC providers assist as many needy areas as possible. We previously reported that at least 22 percent of shortage areas receiving NHSC providers in 1993 received more NHSC providers than needed to lift their provider-to-population ratio to the point at which their HPSA designation could be removed, while 65 percent of shortage areas with NHSC-approved vacancies did not receive any providers at all. Of these latter locations, 143 had unsuccessfully requested a NHSC provider for 3 years or more. In response to our recommendations, the NHSC has subsequently made improvements in its procedures and has substantially cut the number of HPSAs not receiving providers. However, these procedures still allow some HPSAs to receive more than enough providers to remove their shortage designation while others go without. NHSC officials have said that in making placements, they need to weigh not only assisting as many shortage areas as possible, but also factors— such as referral networks, office space, and salary and benefit packages— that can affect the chance that a provider might stay beyond the period of obligated service. 
Since the practice sites on the NHSC vacancy list had to meet NHSC requirements, including requirements for referral networks and salary and benefits packages, such factors should not be an issue for those practice locations. And while we agree that retention is a laudable goal, the impact of the NHSC’s current practice is unknown, since the NHSC does not routinely track how long NHSC providers are retained at their sites after completing their service obligations. The Congress may want to consider clarifying the extent to which the program should try to meet the minimum needs of as many shortage areas as possible, and the extent to which additional placements should be allowed in an effort to encourage provider retention. Another issue that is fundamental to attracting health care professionals to the NHSC is the allocation of funds between scholarships and educational loan repayments. Under the NHSC scholarship program, students are recruited before or during their health professions training—generally several years before they begin their service obligation. By contrast, under the NHSC loan repayment program, providers are recruited at the time or after they complete their training. The scholarship program provides a set amount of aid per year while in school, while the loan repayment program repays a set amount of student debt for each year of service provided. Under the Public Health Service Act, at least 40 percent of the available funding must be for scholarships. We looked at which financing mechanism works better and found that, for several reasons, the loan repayment program is the better approach in most situations. The loan repayment program costs less. On average, each year of service by a physician under the scholarship program costs the federal government over $43,000 compared with less than $25,000 under loan repayment. A major reason for the difference is the time value of money. 
Because 7 or more years can elapse between the time that a physician receives a scholarship and the time that the physician begins to practice in an underserved area, the federal government is making an investment for a commitment for service in the future. In the loan repayment program, however, the federal government does not pay until after the service has begun. The difference in average cost per year of service could increase in the future as a result of a recent change in tax law. Loan repayment recipients are more likely to complete their service obligations. This is not surprising when one considers that scholarship recipients enter into their contracts up to 7 or more years before beginning their service obligation, during which time their professional interests and personal circumstances may change. Twelve percent of scholarship recipients between 1980 and 1999 breached their contract to serve, compared to about 3 percent of loan repayment recipients since that program began. Loan repayment recipients are more likely to continue practicing in the underserved community after completing their obligation. How long providers remain at their sites after fulfilling their obligation is not fully clear, because the NHSC does not have a long-term tracking system in place. However, we analyzed data for calendar years 1991 through 1993 and found that 48 percent of loan repayment recipients were still at the same site 1 year after fulfilling their obligation, compared to 27 percent for scholarship recipients. Again, this is not surprising. Because loan repayment recipients do not commit to service until after they have completed training, they are more likely to know what they want to do and where they want to live or practice at the time they make the commitment. These reasons support applying a higher percentage of NHSC funding to loan repayment.
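The time-value-of-money point above can be illustrated with a simple present-value calculation. The dollar amounts, discount rate, and 7-year lag below are hypothetical stand-ins for illustration only, not the actual figures underlying the $43,000 and $25,000 per-service-year estimates.

```python
def present_value_per_service_year(outlay_per_year, years_until_service, discount_rate):
    """Illustrative cost to the government, in dollars at the time service
    begins, of each year of obligated service.

    A scholarship is paid years before service starts, so the government
    forgoes the return it could have earned on that money in the interim;
    a loan repayment is paid only after service has begun.
    """
    # Compound the outlay forward from the payment date to the start of service
    return outlay_per_year * (1 + discount_rate) ** years_until_service

# Hypothetical figures: $25,000 per service year and a 5 percent discount rate
scholarship_cost = present_value_per_service_year(25_000, 7, 0.05)   # paid 7 years early
loan_repayment_cost = present_value_per_service_year(25_000, 0, 0.05)  # paid at service

# The same nominal outlay costs more per service year under the scholarship
print(round(scholarship_cost), round(loan_repayment_cost))
```

Under these assumed numbers, each scholarship-funded service year costs roughly 40 percent more in time-adjusted terms than the same nominal outlay under loan repayment, consistent with the direction (though not the magnitude) of the cost difference described above.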
The Congress may want to consider eliminating the current requirement that scholarships receive at least 40 percent of the funding. Besides being generally more cost-effective, the loan repayment program allows the NHSC to respond more quickly to changing needs. If demand suddenly increases for a certain type of health professional, the NHSC can recruit graduates right away through loan repayments. By contrast, giving a scholarship means waiting for years for the person to graduate. This is not to say that scholarships should be eliminated. One reason to keep them is that they can potentially do a better job of putting people in sites with the greatest need because scholarship recipients have less latitude in where they can fulfill their service obligation. However, our work indicates that this advantage has not been realized in practice. For NHSC providers beginning practice in 1993-1994, we found no significant difference between scholarship and loan repayment recipients in the priority that NHSC assigned to their service locations. This suggests that the scholarship program should be tightened so that it focuses on those areas with critical needs that cannot be met through loan repayment. In this regard, the Congress may want to consider reducing the number of sites that scholarship recipients can choose from, so that the focus of scholarships is clearly on the neediest sites. While placing greater restrictions on service locations could potentially reduce interest in the scholarship program, the program currently has more than six applicants for every scholarship—suggesting that the interest level is high enough to allow for some tightening in the program’s conditions. If that approach should fail, additional incentives to get providers to the neediest areas might need to be explored. Providers’ current difficulty recruiting and retaining health care professionals such as nurses and others could worsen as demand for these workers increases in the future.
Current high levels of job dissatisfaction among nurses and nurse aides may also play a crucial role in determining the extent of current and future nursing shortages. Efforts undertaken to improve the workplace environment may both reduce the likelihood of nurses and nurse aides leaving the field and encourage more young people to enter the nursing profession. Nonetheless, demographic forces will continue to widen the gap between the number of people needing care and the nursing staff available to provide care. As a result, the nation will face a caregiver shortage of different dimensions from shortages of the past. More detailed data are needed, however, to delineate the extent and nature of nurse and nurse aide shortages to assist in planning and targeting corrective efforts. Regarding the NHSC, addressing needed program improvements would be beneficial. In particular, better coordination of NHSC placements with waivers for J-1 visa physicians could help more needy areas. In addition, addressing shortfalls in HHS systems for identifying underservice is long overdue. We believe HHS needs to gather more consistent and reliable information on the changing needs for services in underserved communities. Until then, determining whether federal resources are appropriately targeted to communities of greatest need and measuring their impact will remain problematic. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or members of the Subcommittee may have. For further information regarding this testimony, please call Janet Heinrich, Director, Health Care—Public Health Issues, at (202) 512-7119 or Frank Pasquier, Assistant Director, Health Care, at (206) 287-4861. Other individuals who made key contributions to this testimony include Eric Anderson and Kim Yamane.
This testimony discusses (1) the shortage of health care workers and (2) the lessons learned by the National Health Service Corps (NHSC) in addressing these shortages. GAO found that problems in recruiting and retaining health care professionals could worsen as demand for these workers increases. High levels of job dissatisfaction among nurses and nurse aides may also play a crucial role in current and future nursing shortages. Efforts to improve the workplace environment may both reduce the likelihood of nurses and nurse aides leaving the field and encourage more young people to enter the nursing profession. Nonetheless, demographic forces will continue to widen the gap between the number of people needing care and the nursing staff available. As a result, the nation will face a caregiver shortage very different from shortages of the past. More detailed data are needed, however, to delineate the extent and nature of nurse and nurse aide shortages to assist in planning and targeting corrective efforts. Better coordination of NHSC placements with waivers for foreign physicians educated in the United States could help more needy areas. In addition, addressing shortfalls in the Department of Health and Human Services (HHS) systems for identifying underservice is long overdue. HHS needs to gather more consistent and reliable information on the changing needs for services in underserved communities. Until then, it will remain difficult to determine whether federal resources are appropriately targeted to communities of greatest need and to measure their impact.
Medicare covers items such as hospital beds, wheelchairs, and blood glucose monitors under its DME benefit because they are specifically included in the Medicare statute’s definition of DME. Other items are covered under the Medicare DME benefit based on CMS’s interpretation of the statute, which does not elaborate on the meaning of “durable.” By regulation, CMS has defined DME as equipment that (1) can withstand repeated use; (2) has an expected lifetime of at least 3 years; (3) is used primarily to serve a medical purpose; (4) is not generally useful in the absence of an illness or injury; and (5) is appropriate for use in the home. Most Medicare beneficiaries enroll in Medicare Part B, which provides coverage for DME if the devices are medically necessary and prescribed by a physician. Medicare beneficiaries typically obtain DME from suppliers, who then submit claims for payment to Medicare on behalf of beneficiaries. CMS contracts with DME MACs to process these claims and ensure proper administration of the DME benefit. Medicare uses three different processes to set the amount it pays for DME. First, the payment amounts for some types of DME are set in a fee schedule that is based on the average charges Medicare allowed during a 12-month period ending June 30, 1987, subject to national floors and ceilings. These historical fee schedule amounts have been updated in some years by a measure of price inflation and a measure of economy-wide productivity. Second, the payments for some DME are set through a competitive bidding program. In that program, qualified DME suppliers with the lowest bids are competitively selected to furnish certain DME product categories to Medicare beneficiaries in designated competitive bidding areas. Third, when CMS classifies a new device on the market as DME, CMS may set the price using the price of a comparable item. If there is no comparable item, then CMS may set the price using the gap-fill methodology.
This method takes supplier price lists for the new item and applies a deflation factor to calculate the base-year price—the price for the 12-month period ending June 30, 1987, on which the original fee schedule was based. To then calculate the payment amount, CMS takes the median deflated price and increases it to the current date using the update factors that were applied to the original fee schedule. Similar to the fee schedule, the final price is subject to floors and ceilings. Typically, when a Medicare beneficiary receives DME in conjunction with home health care, the devices are covered, and payments are made, under the DME benefit. However, prior to January 2017, the home health benefit covered disposable negative pressure wound therapy that may substitute for DME as part of the bundled rate CMS pays home health agencies—a single rate for providing treatment and certain related items or services during a 60-day care episode. Beginning in January 2017, the Consolidated Appropriations Act, 2016, unbundled certain disposable negative pressure wound therapy devices that may substitute for DME under the Medicare home health benefit. The act provides for a separate payment for disposable negative pressure wound therapy— meaning it is not part of the bundled payment amount—and sets the reimbursement rate for disposable negative pressure wound therapy equal to the rate used in an outpatient setting, where the device is covered. Furthermore, the act provides separate payment for the disposable negative pressure wound therapy only to beneficiaries who are receiving home health services. We identified a limited number of disposable medical devices that could potentially substitute for DME, based on our literature review and interviews with industry stakeholders. These devices do not necessarily represent a complete list of available disposable devices. Specifically, we identified eight devices that could potentially substitute for DME. 
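The gap-fill pricing steps described earlier (deflate current supplier list prices to the base year ending June 30, 1987, take the median, then re-apply the historical fee schedule update factors, subject to floors and ceilings) can be sketched roughly as follows. The supplier prices, deflation factor, and update factors shown are hypothetical, not actual CMS figures.

```python
from statistics import median

def gap_fill_price(supplier_prices, deflation_factor, update_factors,
                   floor=None, ceiling=None):
    """Rough sketch of the gap-fill methodology for pricing new DME items.

    supplier_prices: current list prices for the new item
    deflation_factor: converts a current price to the 1987 base-year level
    update_factors: annual update factors applied to the original fee
                    schedule since the base year
    """
    # Deflate each supplier price back to the base year (ending June 30, 1987)
    base_year_prices = [p * deflation_factor for p in supplier_prices]
    # Take the median of the deflated prices
    price = median(base_year_prices)
    # Re-inflate to the current date using the historical update factors
    for factor in update_factors:
        price *= factor
    # Like the fee schedule, the final amount is subject to floors and ceilings
    if floor is not None:
        price = max(price, floor)
    if ceiling is not None:
        price = min(price, ceiling)
    return round(price, 2)

# Hypothetical example: three supplier list prices, a 0.55 deflation factor,
# and three years of 2 percent updates
print(gap_fill_price([120.0, 135.0, 150.0], 0.55, [1.02, 1.02, 1.02]))
```

This is only an illustration of the arithmetic described above; the actual deflation and update factors CMS applies are published in its fee schedule guidance.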
These devices fall into existing DME categories used by Medicare—infusion pumps, including insulin pumps; blood glucose monitors; sleep apnea devices; and nebulizers. These disposable DME substitutes vary in life expectancy. For example, some of the substitutes are intended to last a day, while others a year or two. A few of the disposable DME substitutes we identified have been on the market for more than a decade; a couple have become available more recently, in the past 3 to 5 years. We also identified a disposable DME substitute that is currently in development. Infusion pumps. These devices deliver fluids, including medication, into a patient’s body in a controlled manner. In general, a trained technician programs this device, using built-in software, to deliver fluids at specific rates through disposable tubing connected from the device to the patient via a needle. We identified two examples of disposable devices that could potentially substitute for DME—the ambulatory infusion pump and the elastomeric pump. The disposable ambulatory infusion pump has the same characteristics as the DME version, such as being able to deliver fluid at a controlled rate and using a disposable infusion set that is discarded after a single use. However, unlike the durable infusion pump, the ambulatory infusion pump has a life expectancy of 1 year. The disposable elastomeric pump is a single-use device that utilizes a stretchable balloon reservoir that relies on the pressure from the elastic walls of the balloon to deliver a single dose of medication before being discarded. (See fig. 1.) Insulin pumps. These devices are infusion pumps specifically used to deliver insulin to patients with diabetes. The DME version of an insulin pump consists of an insulin reservoir and a pumping mechanism that controls the release of insulin to the patient via a disposable infusion set. 
We identified two potential disposable substitutes—a completely disposable insulin pump and an insulin pump with both disposable and durable components. The completely disposable insulin pump consists of an adhesive patch containing an insulin reservoir and needle. This patch is attached to the patient’s skin, and a needle is inserted into the skin when a button is pressed, allowing insulin to be delivered throughout the day. This type of insulin pump is intended to last for 24 hours and then be discarded. The other device we identified has both disposable and durable components. This device’s disposable component contains the insulin reservoir, pumping mechanism, and a transmitter sensor in an adhesive patch that is intended to last for 3 days. Its durable component, which is expected to last for 4 years, includes a remote controller that transmits instructions programmed by the patient to the sensor in the patch, which in turn controls the release of insulin via the pumping mechanism. (See fig. 2.) Blood glucose monitor. These devices measure the blood glucose levels in patients. For patients with diabetes, this device provides them with information indicating when an insulin injection is needed. For the DME version of this device, a patient pricks his or her finger, touches the test strip to the blood, and waits for the durable monitor to display a reading on the patient’s blood glucose level. We identified one type of DME substitute. This disposable substitute includes a vial of 50 test strips with a small monitor on the lid. The entire unit is discarded when all of the test strips have been used. (See fig. 3.) Sleep apnea devices. Called continuous positive airway pressure (CPAP) machines, these devices use mild air pressure to keep a patient’s breathing airways open. The DME version of this machine includes a mask or other device that fits over the patient’s nose, and sometimes over the mouth. 
Straps hold the mask in position and a tube is connected to the machine’s motor, which blows air into the tube. We identified one potential disposable DME substitute on the market and another in development. The first is a disposable valve that fits into a patient’s nose with no mask or associated machine. It is intended to last for one night and then be discarded. The second, a disposable micro-CPAP still under development, fits into a patient’s nostrils and is intended to last 8 hours and then be discarded. According to the manufacturer, the time limit of 8 hours is linked to the battery life of the device. (See fig. 4.) Nebulizers. These devices allow a patient to receive a drug via inhalation. Nebulizers change liquid medicine into fine droplets (in aerosol or mist form) that are inhaled through a mouthpiece or mask and used to treat conditions, such as asthma. Disposable nebulizers are generally smaller than the DME versions and may last for a year. (See fig. 5.) Over half of the 21 stakeholders we spoke with—including representatives from device manufacturers discussing their specific devices, Medicare beneficiary advocate groups, providers, and insurers—commented on the multiple benefits of substituting DME with disposable devices. The benefits can be categorized into three areas: (1) patient preference and/or improved quality of life, (2) better health outcomes, and (3) potential cost-savings. Specifically, 12 of the 21 stakeholders mentioned patient preference and/or improved quality of life as a benefit of using disposable substitutes. They said that disposable devices are, for example, often lighter and quieter than durable devices. Thus, in some cases, the substitutes may allow patients more freedom of movement and be more discreet. For example, the disposable insulin pumps do not require users to take additional supplies if they leave the house.
Further, several stakeholders said that disposable devices are easier to use, such as the elastomeric pump, which one stakeholder explained had fewer opportunities for error. Additionally, 9 of the 21 stakeholders we spoke with said these devices can result in better health outcomes due, in part, to better compliance. For example, one stakeholder for a company that manufactures a disposable DME substitute to treat sleep apnea said the company specifically targeted its device to non-compliant users of the durable CPAP machine. This stakeholder said that while the durable CPAP machine is still considered the “gold standard” for treating obstructive sleep apnea, a significant proportion of patients do not comply with treatment over time. In addition, a representative for a company that manufactures a disposable insulin pump said that some patients are able to reduce the amount of insulin they need after using this device because of increased compliance. This representative explained that because the insulin is being delivered at a more continuous, consistent rate due to better compliance, users are making more efficient use of their insulin injections. Twelve of the 21 stakeholders we spoke with noted different ways that disposable devices may result in potential cost-savings for the Medicare program and beneficiaries compared with their DME counterparts in some cases. For example, for patients who have acute conditions, such as those needing a course of antibiotics, it could be more financially prudent for Medicare and the beneficiary to use multiple elastomeric pumps for several days to administer the medication rather than pay for a durable pump, which is usually paid for on a monthly basis under Medicare. Also, four stakeholders said that disposable DME substitutes may generate potential savings because they do not have the cleaning and maintenance costs associated with DME, which can be reused.
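The trade-off stakeholders described—several single-use pumps for a short course of therapy versus a monthly payment for a durable pump—can be sketched as a simple break-even comparison. All prices below are hypothetical illustrations, not actual Medicare rates, and the whole-month billing assumption reflects only the report's note that durable pumps are usually paid for monthly.

```python
# Hedged sketch of the break-even logic described above: for short courses
# of therapy, several single-use elastomeric pumps may cost less than one
# month of durable-pump payment. All prices are hypothetical illustrations.

def cheaper_option(days_of_therapy, disposable_price_per_day,
                   durable_monthly_rate):
    """Return which option costs less for a given course of therapy.

    Assumes (for illustration) that the durable pump is billed in
    whole-month increments, since Medicare usually pays for it monthly.
    """
    months_billed = -(-days_of_therapy // 30)  # ceiling division
    disposable_cost = days_of_therapy * disposable_price_per_day
    durable_cost = months_billed * durable_monthly_rate
    return ("disposable" if disposable_cost < durable_cost else "durable",
            disposable_cost, durable_cost)

# A 10-day antibiotic course: one $40 pump per day vs. a $600 monthly rate.
option, disp, dur = cheaper_option(10, 40.0, 600.0)   # disposable: $400 vs $600

# A 90-day chronic regimen: the durable pump wins ($1,800 vs $3,600).
option2, disp2, dur2 = cheaper_option(90, 40.0, 600.0)
```

This mirrors the stakeholders' point later in the report that disposables tend to favor acute conditions, while durable devices can be more cost-effective for chronic, long-term use.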
Further, one study we reviewed noted that nurses using elastomeric pumps reported a reduced workload for maintenance and education. Table 1 shows examples of the potential cost-savings associated with using disposable DME substitutes compared to their DME counterparts, as noted by manufacturers of disposable DME substitutes. Despite the potential benefits of disposable substitutes, stakeholders also noted limitations to using these devices regarding health outcomes and potential cost-savings. For example, stakeholders and officials from both DME MACs said there are few studies comparing the effectiveness of disposable substitutes with their DME counterpart. Additionally, stakeholders noted that DME might be more appropriate than disposable DME substitutes in some cases, such as when dosing of medication needs to be precise. For example, two stakeholders said that the elastomeric infusion pump might not be appropriate when the rate of medication delivery needs to be specific, such as with some chemotherapy treatments or for patients with chronic conditions that require long-term treatment. Regarding potential cost-savings, four stakeholders noted that potential cost-savings might not be obtained for all disposable DME substitutes. For example, for patients with chronic conditions that require use of DME for extended periods, it might be more cost-effective to use a durable device rather than disposable substitutes that would need to be replaced regularly. Stakeholders we spoke with—including representatives from device manufacturers, Medicare beneficiary advocate groups, providers, and insurers—cited several market incentives for developing potential disposable DME substitutes. 
For example, 12 of the 21 stakeholders noted that there is an international market for disposable devices, an increasing demand for some types of devices resulting from a growing patient population, a general movement toward disposable devices resulting from advancing technology, and that some disposable devices can be sold as a “cash product”: that is, the product could be sold at relatively low cost without insurance coverage. However, over half of the stakeholders we interviewed said lack of insurance coverage for disposable DME substitutes—particularly Medicare—was a disincentive to developing such products. Specifically, 13 of the 21 stakeholders cited lack of insurance coverage as a disincentive to developing disposable DME substitutes, including representatives from all of the manufacturer organizations and two-thirds of the manufacturers we interviewed. Further, 9 of these 13 stakeholders—including 4 out of 9 manufacturers—specifically cited lack of Medicare benefit coverage as a barrier, with 2 of these 4 manufacturers noting that disposable substitutes do not meet CMS’s 3-year minimum lifetime requirement to be categorized as DME. Eight of the 13 stakeholders also said that lack of Medicare coverage decreased their chances for obtaining benefit coverage from Medicaid and insurers, which often follow Medicare payment policy. Based on our analysis of interviews with these stakeholders, we found limited coverage for the potential disposable substitutes we identified.
Specifically, according to the manufacturers we spoke with, the disposable elastomeric infusion pump is covered by Medicaid in some states and is covered by some health insurance plans; the completely disposable insulin pump is covered under Medicare Part D by some plans, by Medicaid in some states, and by other insurance programs, including TRICARE; the insulin pump with disposable and durable components has Medicaid coverage in some states and extensive insurance coverage; and the disposable sleep apnea device has some coverage via insurance and other programs, including the Department of Veterans Affairs. In addition, manufacturers noted that neither the disposable ambulatory infusion pump nor the completely disposable blood glucose monitor is covered by Medicare, Medicaid, or other insurance programs. Stakeholders also raised concerns about how CMS determines whether a device with disposable components meets the definition of DME. As technology has advanced, manufacturers have developed potential substitutes for DME with both durable and disposable components. However, according to CMS officials, in order for a device to be covered by Medicare, the agency must determine that the medically necessary function of the device is performed by a durable component, not a disposable one. Two of the stakeholders we spoke with expressed concerns about CMS’s approach to making durability determinations—and thus benefit coverage determinations—based on whether the durable or disposable component performs the medically necessary function. They said CMS should make such decisions based on the whole device and not its individual parts. In their view, if the durable component is essential to the device, that should be sufficient. CMS has made benefit coverage determinations for at least two devices with disposable and durable components. The first device is a specific continuous glucose monitor, which CMS classified as DME.
This device includes an adhesive patch containing a disposable sensor and a wireless transmitter that sends information to a durable electronic receiver that displays a patient’s blood glucose level accurately enough for a patient to make treatment decisions. CMS determined that for this device, the medically necessary function performed is the displaying of the blood glucose level, and therefore this particular continuous glucose monitor could be classified as DME. The second device involves one of the disposable insulin pumps we identified. This particular device includes an adhesive patch containing an insulin reservoir, pumping mechanism, and a transmitter sensor that delivers insulin after receiving instructions transmitted from a programmable durable electronic device. According to the manufacturer of this device, although the durable component meets CMS’s 3-year minimum lifetime requirement, CMS determined that for this device, the medically necessary function is the pumping mechanism that delivers insulin. Therefore, because the pumping mechanism is disposable, CMS determined that this insulin pump is not considered DME. Furthermore, 6 of the 21 stakeholders we spoke with noted that technology is advancing in the area of medical device development. Five of these stakeholders specifically cited CMS’s definition of DME as a disincentive to technological innovation, such as the development of disposable substitutes. As advancing technology results in changes to the functionality of devices, including the development of disposable substitutes, CMS will likely have to consider how its benefit coverage policies will apply to them. CMS has already faced issues accommodating new technology related to smartphone applications; for example, the continuous glucose monitor we described above, which CMS classified as DME, sends information to a durable electronic receiver that displays a patient’s blood glucose level. 
Alternatively, this information can be displayed using a smartphone application; however, officials from one DME MAC that we spoke with said the receiver would not be covered by Medicare if the information was obtained from the smartphone. CMS officials explained that the smartphone application is not considered DME because the smartphone itself is not primarily and customarily used to serve a medical purpose and is useful to an individual in the absence of illness or injury; therefore, it is not considered a medical device, although it may be used to track medical information. As technology advances, manufacturers may continue to incorporate these advances into devices that have the potential to substitute for DME, and more disposable devices may be developed in the future. Federal internal control standards state that management should identify, analyze, and respond to change, including anticipating and planning for significant changes using a forward-looking process. CMS has already begun facing issues related to advancing technology, such as making policy determinations for devices with both durable and disposable components based on the functionality of different parts of the device. However, Medicare currently does not cover most potential disposable DME substitutes because they do not meet Medicare’s definition of “durable,” which CMS has interpreted to mean withstanding repeated use and having an expected minimum lifetime of 3 years, among other things. Further, CMS officials told us that the agency continues to regard this interpretation of the Medicare statute as appropriate and has not considered the possibility of reexamining it in order to accommodate disposable substitutes. Without such consideration for Medicare coverage, CMS and other insurers that follow Medicare payment policy may not be taking advantage of the possible benefits of these devices.
If Medicare coverage were expanded to include disposable DME substitutes, CMS would need to consider issues related to benefit category designation. We identified three possible options that CMS could consider as benefit categories for expanding coverage: (1) using the DME benefit, (2) using the home health benefit, or (3) establishing a new benefit. (See table 2.) Under each scenario, CMS would need to consider its authority to provide for such expanded coverage. In addition, CMS would need to evaluate potential payment methodologies for reimbursement, taking into consideration its responsibility to be a prudent purchaser of medical care. Although we identify several possibilities in this report, this is not intended to be an exhaustive list of potential benefit categories and payment methodologies to be considered if Medicare coverage were expanded to disposable DME substitutes. Because the disposable DME substitutes we identified generally treat the same conditions as some DME items, consideration could be given to expanding eligibility for the DME benefit to cover similar disposable items. Although the CMS regulation interpreting the statutory definition of DME includes a requirement that such equipment can withstand repeated use and have a life expectancy of at least 3 years, CMS officials acknowledged their authority to promulgate rules amending the regulation to potentially shorten the minimum lifetime expectancy. However, it is uncertain whether CMS could reasonably interpret “durable” in such a way that allows for coverage of all of the disposable DME substitutes we identified—many of which are intended for single or short-term use. Thus, providing Medicare coverage to all of these disposable DME substitutes would likely require congressional action. 
Some stakeholders also noted that using the DME benefit would require several coverage decisions to be made, either through CMS’s national coverage determination process or through the local process conducted by the DME MACs. For example, one decision point three stakeholders noted is whether Medicare would cover a disposable DME substitute as a potential replacement for its durable counterpart in all cases or only under certain circumstances. A second consideration, mentioned by three other stakeholders, is whether there would be limits to the benefit, such as whether Medicare would cover a disposable DME substitute only for a certain number of months. Limiting coverage in such a way could encourage the use of disposable DME substitutes for acute conditions over chronic conditions. Three stakeholders suggested that some disposable DME substitutes, such as elastomeric infusion pumps, are more likely to result in cost-savings as compared to their durable counterparts when used to treat acute conditions. However, limiting coverage of disposable DME substitutes to a certain length of time could impede beneficiary access to their preferred medical equipment. We identified two approaches CMS could consider within the DME benefit for setting reimbursement rates for disposable DME substitutes. Specifically, payments for disposable DME substitutes could be based on the payment rates of their DME counterparts, or they could be treated separately, making use of the typical procedures for establishing payment rates for new DME. For the first approach, CMS could set the reimbursement rate for a disposable DME substitute at the same price as the DME counterpart, or—recognizing that disposable devices may be less costly than DME—at a reduced percentage of the rate for the DME counterpart. However, we have previously reported that the historical charges on which the fee schedule rates are based are outdated and do not reflect current costs.
As a result, the reimbursement rates increase costs to both the federal government and Medicare beneficiaries. In addition, four stakeholders raised concerns about this type of “one-size-fits-all” approach to disposable DME substitutes. For example, one stakeholder noted that disposable DME substitutes could vary widely in cost, quality, and the length of time a beneficiary requires the device. The cost of certain disposable DME substitutes may not be similar to the cost of their DME counterparts. Therefore, basing the rates for all disposable devices on the DME rates might result in significant over- or underpayments. Using the second approach, separate payment rates could be established for disposable DME substitutes using one of the two existing DME payment methodologies: the fee schedule based on historical charges or the competitive bidding program. Setting payment rates via the historical fee schedule would likely entail using the gap-fill method because the disposable DME substitutes we identified did not exist at the time the historical charges for most DME were set. In this case, the gap-fill method would take current supplier prices and deflate them to the 12-month period ending June 30, 1987, used in the original fee schedule, applying a deflation factor based on the consumer price index for all urban consumers, and then re-inflate the price using an inflation factor limited to the years in which a DME inflation update was provided. One stakeholder expressed concern regarding the gap-fill method, though, stating that the process results in pricing that does not accurately represent market prices. In addition, as with the fee schedule amounts for DME, the payments for disposable DME substitutes could become outdated over time and increase costs to both the federal government and Medicare beneficiaries.
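The gap-fill calculation described in this report—deflate current supplier prices to the 1987 base period, take the median, re-inflate with historical update factors, then apply the floor and ceiling—can be sketched as follows. All numbers here (supplier prices, deflation factor, update factors, floor, and ceiling) are hypothetical illustrations, not actual CMS figures.

```python
# Hedged sketch of the gap-fill pricing method, using made-up numbers.
from statistics import median

def gap_fill_price(supplier_prices, deflation_factor, update_factors,
                   floor, ceiling):
    """Estimate a fee-schedule amount for a new DME item.

    supplier_prices:  current list prices reported by suppliers
    deflation_factor: multiplier restating a current price in terms of the
                      base period ending June 30, 1987 (CPI-U based)
    update_factors:   one inflation update per year in which a DME update
                      was provided
    floor, ceiling:   national bounds on the final payment amount
    """
    # Step 1: deflate each current supplier price to the 1986-87 base period.
    deflated = [p * deflation_factor for p in supplier_prices]

    # Step 2: take the median deflated price as the base-year amount.
    base_price = median(deflated)

    # Step 3: re-inflate the base price using the historical update factors.
    current = base_price
    for f in update_factors:
        current *= f

    # Step 4: apply the national floor and ceiling.
    return max(floor, min(ceiling, current))

# Illustrative use with hypothetical figures:
price = gap_fill_price(
    supplier_prices=[120.0, 135.0, 150.0],
    deflation_factor=0.45,               # hypothetical CPI-U deflator
    update_factors=[1.03, 1.02, 1.025],  # hypothetical yearly updates
    floor=50.0, ceiling=140.0)
```

The sketch makes concrete the stakeholder's concern noted above: the final price is anchored to a decades-old base period and to whichever years received updates, so it can drift away from current market prices.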
A competitive bidding program that would reflect market prices could be established for disposable DME substitutes, as CMS has done for certain DME items. We previously found that, among other things, the competitive bidding method has generally led to reduced payments for those DME included in the program. We have also previously reported that CMS’s monitoring of the competitive bidding program indicated that beneficiary DME access and satisfaction had not been affected, but noted some stakeholders’ concerns, such as difficulty locating a contract supplier. Disposable DME substitutes could potentially be covered under Medicare’s home health benefit. There is precedent for such coverage: disposable negative pressure wound therapy devices are covered under the home health benefit. Previously, they were covered as part of the bundled rate, but the Consolidated Appropriations Act, 2016, required certain disposable negative pressure wound therapy devices to be paid separately under Medicare home health services. Coverage under the home health benefit could potentially be expanded to include other types of disposable DME substitutes that we identified. However, coverage under this benefit would only be applicable in cases where a beneficiary is receiving Medicare home health services, which excludes beneficiaries who are not homebound and do not have a need for skilled care. Furthermore, because disposable DME substitutes are not among the existing covered services for home health, covering these disposable devices under this benefit would likely require legislation. We identified two options for setting payment rates within the home health benefit: as a separate payment or as part of bundled payments. Under the home health benefit, a separate payment amount could be set for disposable DME substitutes. For the disposable negative pressure wound therapy device, Congress established a separate payment amount equal to the amount paid under the outpatient payment system. 
However, according to manufacturers we spoke with, the disposable DME substitutes we identified are not currently covered under the outpatient benefit, and therefore no such rates exist that Congress could use to do the same for these devices. Additionally, we have previously found that generous separate payments can incentivize a specific medical practice even if it is not always entirely warranted. For example, we found that the separate payments for injectable drugs used in treating end-stage renal disease exceeded the costs of acquiring them and provided an incentive to use more of the drugs than necessary. Alternatively, disposable DME substitutes could be included in the home health bundle. CMS sets the bundled payment’s national average base amount as the amount that would be paid for a typical home health patient residing in an average market. Including disposable DME substitutes in the bundle might subsequently mean recalculating the base rate. However, as one stakeholder noted, including disposable DME substitutes in the bundle would mean they are treated differently than their DME counterparts, which are paid separately under the DME benefit for beneficiaries receiving home health services. Literature and two of our stakeholder interviews noted that bundled payment can incentivize using the device that the provider has determined to be more cost-effective, but if DME and disposable DME substitutes were paid differently, providers might not have an incentive to choose the more cost-effective device. A new benefit category could be established to specifically cover the disposable DME substitutes we identified, or—more broadly—to cover a category of disposable devices that could potentially substitute for DME, including those not yet on the market. Only Congress has the authority to create new Medicare benefit categories.
If Congress created such a benefit, it could establish a new payment methodology, use one of the payment methodologies discussed in this report, or use an existing payment methodology we have not discussed here. For example, one stakeholder suggested emulating a payment mechanism established under the Protecting Access to Medicare Act of 2014 for clinical laboratory tests: beginning in 2018, the Medicare rate would reflect private payer rates for these tests. Regardless of the methodology established, the rate would ideally be set to account for the costs of relatively efficient providers of the devices, and provide sufficient access to the devices for beneficiaries. Some disposable medical devices may have the potential to substitute for DME and may offer advantages in some cases, such as cost-savings and better health outcomes. While a few of these disposable DME substitutes have been on the market for several years, a couple we identified are more recent. As technology advances, more manufacturers could develop new disposable devices, and stakeholders we interviewed identified incentives to do so, such as a growing patient population. However, Medicare currently does not cover most disposable DME substitutes because they do not meet Medicare’s definition of durability. CMS officials stated that they continue to regard this definition as appropriate and have not considered the possibility of extending DME coverage to these substitutes. As we noted, there may be ways to cover disposable DME substitutes other than with the DME benefit and its associated payment methodologies, such as the home health benefit. According to federal internal control standards, management should anticipate and plan for significant changes using a forward-looking process. 
Without considering whether disposable DME substitutes should be covered by Medicare, CMS and other insurers that follow Medicare payment policy may not recognize advances in technology that may provide potential cost-savings and better health outcomes. We recommend that the Administrator of CMS evaluate the possible costs and savings of using disposable devices that could potentially substitute for DME, including options for benefit categories and payment methodologies that could be used to cover these substitutes, and, if appropriate, seek legislative authority to cover these devices. We provided a draft of this report to HHS for comment. HHS’s written comments are reproduced in appendix II. HHS also provided technical comments, which we incorporated as appropriate. In its written comments, although HHS did not state whether it agreed or disagreed with the recommendation, the agency stated that it is premature to conduct the study we recommended of the possible costs and savings of using disposable devices that could potentially substitute for DME. HHS emphasized that only Congress has the authority to create new benefit categories and payment systems for potential disposable DME substitutes and that additional information is needed on whether disposable devices are appropriate clinical substitutes before conducting an analysis of possible costs and savings. We agree, and our report states, that CMS may lack the authority to interpret “durable” in a way that allows for coverage of all the disposable DME substitutes we identified and that congressional action may be required for Medicare to cover some of these devices. However, without conducting a study to identify the potential costs and benefits of covering such devices, CMS will lack the necessary clinical and cost information to determine if it would be beneficial to reassess current statutory and regulatory coverage rules. 
In other instances, CMS has used the national and local coverage determination processes to establish clinically based policies related to DME. Moreover, CMS—which oversees the implementation of complex Medicare payment rules—is uniquely positioned to consider the extent to which coverage of any clinically appropriate substitutes would benefit the federal government and beneficiaries. For these reasons, we disagree with HHS that an evaluation of potential disposable DME substitutes is premature. As we state in the report, management should anticipate and plan for significant changes using a forward-looking process, according to federal internal control standards. The study we recommended is such a forward-looking process. Unless it is undertaken, neither HHS nor Congress will have the information it needs to reassess whether the current statutory and regulatory framework makes good clinical and fiscal sense. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. In addition to the contact named above, Martin T. Gahart, Assistant Director; Hannah Marston Minter, Analyst-in-Charge; George Bogart; Ricky Harrison; Gay Hee Lee; Elizabeth T. Morrison; and Alison Smith made key contributions to this report.
In 2015, Medicare spent $6.7 billion for DME. CMS's definition of DME generally precludes potential disposable DME substitutes from coverage. Congress included a provision in law for GAO to review the potential role of disposable medical devices as substitutes for DME. This report examines (1) potential disposable DME substitutes and their possible benefits and limitations; (2) the incentives and disincentives stakeholders identified for developing these substitutes, including the possible influence of health insurance coverage; and (3) issues related to benefit category designation—including legal authority and potential payment methodologies—if Medicare coverage were expanded to include disposable DME substitutes. GAO reviewed agency documents and literature on disposable DME substitutes and Medicare payment policy; interviewed CMS officials; and interviewed various stakeholders, including representatives of device manufacturers, beneficiary advocates, health care providers, and insurers, for their perspectives. While disposable medical devices are generally not covered by Medicare, GAO identified eight that could potentially substitute for durable medical equipment (DME) items that are covered. These disposable DME substitutes fall into existing Medicare DME categories—infusion pumps, including insulin pumps; blood glucose monitors; sleep apnea devices; and nebulizers. Stakeholders GAO interviewed identified multiple benefits of using disposable substitutes, such as better health outcomes and potential cost-savings. However, they also cited factors that limit their use, including that these substitutes may not lead to cost-savings in all cases. Stakeholders identified several market incentives, such as a growing demand, as reasons to develop disposable DME substitutes, but mostly cited lack of coverage by Medicare as a disincentive to development. 
Disposable DME substitutes are generally precluded from Medicare coverage under the DME benefit because they do not meet the Centers for Medicare & Medicaid Services' (CMS) regulatory definition of “durable”—able to withstand repeated use, with an expected lifetime of at least 3 years. Stakeholders noted that this also decreases their chances of obtaining coverage from other insurers, which may follow Medicare payment policy. Some stakeholders noted that CMS's DME definition is a disincentive to technological innovation, and the agency has already faced challenges in making coverage decisions for some devices. According to federal internal control standards, management should anticipate and plan for significant changes using a forward-looking process, but CMS officials said the agency has not considered the possibility of reexamining its definition. As a result, the agency may not be taking advantage of the potential benefits of these devices. If Medicare coverage were expanded to include disposable DME substitutes, CMS would need to consider issues related to benefit category designation. GAO identified three possible options for covering disposable DME substitutes: an expansion of the current DME benefit, an expansion of the current home health benefit, or establishment of a new benefit category. The table lists the options GAO identified, which are not exhaustive. CMS would also need to consider its authority to provide for expanded coverage and evaluate potential reimbursement options. GAO recommends that CMS, within the Department of Health and Human Services (HHS), evaluate the possible costs and savings of using disposable devices as substitutes for DME, and, if appropriate, seek legislative authority to cover them. HHS stated that such an evaluation was premature. However, GAO continues to believe an evaluation is needed to help HHS anticipate and plan for significant changes using a forward-looking process.
Decades of conflict have left Afghanistan a poor nation with high illiteracy, weak government institutions, and a high level of corruption. According to Transparency International’s index of perceived corruption, Afghanistan is tied with Burma as the world’s second most corrupt nation. The United States has allocated about $56 billion for fiscal years 2002 to 2010 to reconstruct Afghanistan, as shown in table 1. The United States allocated nearly half of these funds—about $27 billion—in fiscal years 2009 and 2010 alone. For fiscal year 2011, DOD has allocated more than $12.6 billion in additional funds for Afghan reconstruction. While the allocation of fiscal year 2011 State and USAID funds for Afghanistan had not been finalized as of June 2011, State’s fiscal year 2011 budget request included more than $5 billion for Afghan international affairs programs and operations. In 2009, the executive branch adopted the Integrated Civilian-Military Campaign Plan to guide U.S. reconstruction activities in Afghanistan. The plan, which is currently being updated, categorizes reconstruction activities in terms of three overarching lines of effort—development, governance, and security. State officials have informed us that U.S. agencies do not track Afghan reconstruction funds by the lines of effort. U.S. agencies have used various means to implement Afghan reconstruction projects with these funds. In some cases, they have hired contractors and nongovernment organizations. In other cases, U.S. reconstruction funds have been provided directly to the Afghan government’s national budget to be used by Afghan ministries and other government entities. In 2010, the United States announced plans to increase direct assistance to Afghanistan. In January 2010, the Secretary of State announced that the United States would increase direct assistance to the Afghan government to help Afghan ministries and other government entities build their capacity to manage funds. 
At two international conferences in 2010, the United States and other donors pledged to provide half or more of their development aid in the form of direct assistance to the Afghan government within 2 years, contingent on Afghan actions to reduce corruption and strengthen public financial management capacity. In February 2011, DOD formally authorized direct contributions of DOD funds to two Afghan security ministries to build their capacity and support Afghan security forces. USAID awards direct assistance to Afghanistan through two means. It awards direct assistance to several Afghan government entities through bilateral agreements overseen by its mission in Afghanistan. These entities include the Independent Directorate for Local Governance and the ministries of Agriculture, Irrigation, and Livestock; Communications and Information Technology; Finance; Public Health; and Transport and Civil Aviation. Some of the bilateral agreements finance Afghan government procurement of goods and services, while others fund a range of other government expenses and activities, including operating costs, salaries, agricultural development programs, and infrastructure projects. USAID also provides direct assistance by awarding funds to the multilateral World Bank-administered Afghanistan Reconstruction Trust Fund (ARTF). ARTF was established in 2002 as a vehicle for donors to pool resources and coordinate support for Afghanistan’s reconstruction. As of April 2011, 32 donors had contributed about $4.3 billion to ARTF. ARTF provides these funds through the Afghan government national budget to finance the government’s recurrent operating costs (e.g., wages for civil servants, operations and maintenance costs) and national development programs. DOD provides direct assistance bilaterally to Afghanistan’s Ministry of Defense (MOD) and Ministry of Interior (MOI) through contributions of funds overseen by DOD’s Combined Security Transition Command– Afghanistan (CSTC-A). 
According to DOD guidance, these contributions are used to procure food, salaries, goods, services, and minor construction in direct support of the Afghan National Army (ANA) and the Afghan National Police (ANP). CSTC-A also contributes funds to the multilateral UNDP-administered Law and Order Trust Fund for Afghanistan (LOTFA), which receives contributions from several donor nations. Most LOTFA funds are used to provide salaries to ANP personnel. The United States more than tripled its awards and contributions of USAID and DOD direct assistance funds to the Afghan government in fiscal year 2010 compared with fiscal year 2009 (see fig. 1). In fiscal year 2010, most of the direct assistance funds (about 71 percent) were awarded by USAID for activities related to development and governance, either bilaterally (about 6 percent) or through preferenced contributions to ARTF (about 65 percent), as shown in figure 2. For example, USAID has contributed funding to a community development and local governance program that is being implemented in all of Afghanistan’s 34 provinces through ARTF. The remainder was contributed by DOD for security assistance, either bilaterally to MOD and MOI or through LOTFA. As shown in table 2, USAID awards of direct assistance to Afghanistan increased from over $470 million in fiscal year 2009 to more than $1.4 billion in fiscal year 2010. These awards included a $1.3 billion grant to ARTF, more than triple what it awarded to ARTF in 2009. USAID may obligate and disburse funds awarded to an Afghan entity or trust fund over multiple years, depending on the agreement’s terms. DOD direct assistance to MOD and MOI, including contributions to LOTFA, grew from about $195 million in fiscal year 2009 to about $576 million in fiscal year 2010. DOD contributions to LOTFA more than doubled from about $68 million in fiscal year 2009 to about $149 million in fiscal year 2010. 
Risk assessments and internal controls to mitigate identified risks are key elements of an internal control framework to provide reasonable assurance that agency assets are safeguarded against fraud, waste, abuse, and mismanagement. USAID conducted preaward risk assessments in most cases. However, we found that USAID’s policies for assessing direct assistance risks do not require preaward risk assessments in all cases. USAID has not updated its policies to reflect the USAID Administrator’s July 2010 commitment to Congress that USAID would assess all Afghan public institutions before providing them with direct assistance. We found that in August 2010 and January 2011, USAID did not complete preaward risk assessments before awarding funds to two Afghan government entities. USAID has established various financial and other controls in its bilateral direct assistance agreements with ministries that go beyond what is required by its policies. However, it has not always ensured compliance with those controls. DOD personnel in Afghanistan have assessed the risk of providing funds to MOD and MOI through quarterly reviews of each ministry’s capacity. DOD established formal procedures on risk assessment for Afghan direct assistance in June 2011 after we informed DOD officials that DOD lacked such procedures. DOD officials also stated that they review records of MOD and MOI expenditures to assess whether funds have been used as intended, as required by DOD policies established in February 2011. USAID mission staff have complied with USAID risk assessment policies for awarding bilateral direct assistance funds to finance Afghan procurement activities under what USAID refers to as a host country contract. USAID policies, as outlined in its Automated Directives System (ADS), require USAID staff to conduct a preaward risk assessment for a host government entity if the entity is to use the award to procure goods and services. 
Specifically, staff are required under ADS to (1) assess the entity’s procurement system and (2) obtain the Mission Director’s certification of the entity’s capability to undertake the procurement. Of USAID’s eight bilateral direct assistance agreements, we identified two involving the financing of Afghan procurement activities. In both cases, we found that USAID mission staff, in compliance with ADS, had (1) assessed the financial and procurement management capabilities of the Afghan recipients (the Ministry of Communications and Information Technology and the Ministry of Public Health) before awarding funds (see table 3) and (2) obtained the required certifications. Of six bilateral direct assistance agreements that did not involve financing Afghan government procurement activities, we found that USAID had completed such assessments before awarding funds in four cases (see table 3). Although USAID did not conduct preaward assessments in two cases, it was in compliance with its risk assessment policies. Those policies state that USAID staff “should” assess the capacity (e.g., financial management, procurement, and personnel management capacity) of prospective recipients in cases that do not involve financing Afghan government procurement activities. USAID has not updated its risk assessment policies to reflect its Administrator’s commitment that USAID would assess the capabilities of Afghan government recipients in all cases before awarding them direct assistance funds. 
On July 28, 2010, USAID’s Administrator responded to concerns expressed by Members of the House Appropriations Committee’s Subcommittee on State, Foreign Operations, and Related Programs regarding corruption and weak government capacity in Afghanistan. The Administrator committed that USAID would not proceed with direct assistance to an Afghan public institution until USAID had ensured that the institution had an accountable organizational structure and sound financial management capabilities and met USAID standards. State’s Office of the Special Representative for Afghanistan and Pakistan made a similar commitment in January 2010, when it stated that “to receive direct assistance, Afghan ministries must be certified as having improved accountability and transparency.” However, we found that current USAID policy for direct assistance not involving the financing of Afghan government procurement activities does not require USAID to assess a prospective recipient’s capacity to implement a proposed activity. We also found that following the Administrator’s July 2010 commitment, USAID awarded direct assistance funds to two Afghan government recipients before completing risk assessments. As shown in table 3, USAID signed a $40 million agreement with the Independent Directorate for Local Governance in August 2010, 5 months before completing an assessment of that entity. It also signed a $6 million bilateral direct assistance agreement with the Ministry of Transport and Civil Aviation in January 2011, 2 months before completing an assessment of the ministry. The completed risk assessments identified areas of high risk in both entities. For example, the Ministry of Transport and Civil Aviation was assessed as “high risk” in the four core function areas covered by the assessment—control environment, financial management and accounting, compliance with applicable laws and regulations, and accountability environment. 
Similarly, the Independent Directorate for Local Governance was assessed as “high risk” in 5 of 14 areas covered, including financial management and procurement. USAID officials told us that USAID awarded these funds before completing the risk assessments because the projects were urgently needed. USAID has established various financial and other controls in its bilateral direct assistance agreements, although USAID policies do not establish minimum standard conditions for such agreements, according to USAID officials. Shown in table 4 are selected examples of financial controls USAID has established within its bilateral direct assistance agreements. USAID also required Afghan government recipients to provide documentation demonstrating their compliance with the selected controls. As shown in table 4, in each applicable case, USAID ensured compliance with the selected controls. In two cases, USAID also hired contractors to help control risks identified in preaward assessments. For example, USAID’s assessment of the Ministry of Agriculture, Irrigation, and Livestock (MAIL) determined that MAIL would not be able to independently manage and account for direct assistance funds. As a result, USAID awarded a $49.1 million contract to a U.S.-based firm to establish a unit to manage a USAID-funded agriculture development fund, to transition that unit to local control within 4 years, and to provide technical assistance. Similarly, USAID’s October 2007 assessment of the Ministry of Public Health noted concerns that the ministry would continue to need technical assistance to effectively and efficiently manage donor funds. As a result, USAID amended an existing contract with an international nonprofit organization to improve the capacity of the ministry at the central level and in target provinces. 
USAID has also established procurement-specific controls in its bilateral direct assistance agreements with the Ministry of Communications and Information Technology and the Ministry of Public Health. These agreements provide funds to Afghan ministries to enter into contracts for goods and services and require USAID to monitor and approve certain steps of the procurement process for contracts over $250,000, as applicable. While USAID generally complied with this requirement, USAID mission officials could not provide us with documentation showing that USAID had done so in all cases, as shown in table 5. Specifically, USAID mission officials either did not approve, or did not document that they had approved, any of the 6 contracts that the Ministry of Communications and Information Technology entered into prior to their execution (in table 5, see step 7 of the procurement process). In addition, USAID mission officials told us that USAID did not approve any of the ministry’s 6 prefinancing contract documents (step 8 of the procurement process). USAID stated that no clearance or approval was provided because the final signed documents did not need concurrence. Similarly, USAID documented only three instances in which it had approved any of the Ministry of Public Health’s 19 prefinancing contract documents. USAID also did not conduct follow-up reviews of the ministry to ensure its compliance with USAID contracting and financial management requirements, as called for in the assistance agreement. USAID has taken steps to ensure that bilateral direct assistance awards are audited. USAID policy requires audits of recipients, including host government entities, that expend $300,000 or more in USAID awards during a fiscal year. USAID has asserted its right to audit Afghan recipient use of funds in all of its bilateral direct assistance agreements, including those involving procurement. 
According to USAID mission officials, USAID has contracted with audit firms to initiate audits of three Afghan ministries (the Ministries of Finance, Communications and Information Technology, and Public Health) that disbursed a total of $28.8 million in USAID awards in fiscal year 2010. CSTC-A has recently established procedures that require CSTC-A personnel to assess the risks of direct assistance in advance of providing funds to Afghan ministries. On June 12, 2011, CSTC-A established standard operating procedures for direct assistance, as required under DOD guidance issued on February 4, 2011. The CSTC-A procedures identify risk assessment as the first of four steps CSTC-A personnel must take before the direct contribution of the funds. CSTC-A adopted these procedures after we informed DOD officials that DOD lacked risk assessment guidance for bilateral direct assistance. The CSTC-A procedures specify that the primary method CSTC-A is to use to assess risks is the Ministerial Development Board. The board oversees CSTC-A efforts to develop the capacity of MOD and MOI. CSTC-A officials informed us in January and February 2011 that CSTC-A has been using this method to assess the capacity of MOD and MOI in connection with direct assistance. They stated that CSTC-A advisers embedded in MOD and MOI participate in quarterly assessments of MOD and MOI progress toward meeting defined capability objectives. For example, CSTC-A assesses MOI development in 26 different areas, including finance and budget, procurement, and personnel management. The assessments focus on the extent to which the ministries are capable of achieving the objectives and identify specific strengths and weaknesses. For example, in April 2011, CSTC-A assessed the MOD budget and finance section responsible for ANA pay support operations. CSTC-A determined that the section’s strengths included experienced staff and a willingness to tackle corruption, and that its weakness was a lack of budget authority. 
DOD’s February 4, 2011, guidance requires CSTC-A to establish financial controls for its contributions to MOD and MOI.
- The guidance specifically requires CSTC-A to conduct quarterly reconciliations of CSTC-A advance payments to MOD and MOI against records of MOD and MOI expenditures. CSTC-A officials informed us that CSTC-A reconciles CSTC-A advance contributions against MOD and MOI expenditure data drawn from the Ministry of Finance (MOF) Afghan Financial Management Information System and has adjusted future contributions accordingly. DOD officials acknowledged the reconciliation process does not address the extent to which aggregated line items from the system may contain inaccurate ANA and ANP payroll data.
- The guidance also requires CSTC-A to monitor MOD and MOI use of the contributed funds down to the subcontractor level. CSTC-A officials informed us that they would be unable to monitor MOD and MOI subcontractors, as called for in the DOD guidance. They stated that the risk of sending personnel to vet MOD and MOI subcontractors in certain regions of Afghanistan was too great.

In addition, CSTC-A advisers monitor MOD and MOI use of U.S. funds, according to CSTC-A officials. CSTC-A informed us that it has embedded about 500 advisers in MOD and MOI, including 6 in MOD financial offices and 13 in MOI finance and budget offices. Also, CSTC-A personnel participate in internal control teams that review ANA pay processes in a different ANA corps every month. USAID and DOD generally rely on the World Bank and UNDP to ensure accountability over U.S. direct assistance provided multilaterally through ARTF and LOTFA. USAID, however, has not consistently complied with its multilateral trust fund risk assessment policies in awarding funds to ARTF. For example, in March 2010, USAID did not conduct a risk assessment before awarding an additional $1.3 billion to the World Bank for ARTF. 
During our review, DOD established procedures in June 2011 requiring that it assess risks before contributing funds to LOTFA. World Bank and UNDP controls over ARTF and LOTFA funds include the use of hired monitoring agents to help ensure that ministries use donor contributions as intended. However, these controls face challenges posed by security conditions and by weaknesses in Afghan ministries. For example, the ARTF monitoring agent resigned in June 2011 due to security concerns, while weaknesses in MOI’s systems for paying wages to Afghan police challenge UNDP efforts to ensure that MOI is using LOTFA funds as intended. USAID has not consistently followed its own policies for assessing the risk associated with its awards to the World Bank for ARTF, which have increased from $5 million in 2002 to a total of more than $2 billion. When the grant agreement and subsequent modifications between the World Bank and USAID were signed, USAID policies on grants to public international organizations (PIOs), such as the World Bank, called for preaward determinations that the PIO was a responsible grantee. This requirement applied both to the original grant and to any subsequent modification of the grant that significantly increased the amount of the award. Under USAID policy, the preaward determination should have addressed factors such as whether the grantee’s program was an effective and efficient way to achieve a USAID objective and whether there were any reasons to consider the grantee to be “not responsible.” USAID could not provide us with a preaward responsibility determination of the World Bank prior to awarding ARTF an initial grant of $5 million in 2002. 
While USAID did not follow its policies by completing a preaward determination for its initial $5 million grant, it determined after signing the agreement that (1) ARTF had a comprehensive system in place for managing the funds and (2) the World Bank had a long history of managing multidonor pooled funding mechanisms. USAID documented these determinations in an approved 2002 memorandum requesting a deviation from incorporating its then-mandatory standard provisions into its ARTF grant agreement. However, USAID did not conduct preaward determinations for 16 of the 21 subsequent modifications to the grant. For the instance in which USAID increased the value of the award by $1.3 billion in March 2010, USAID provided us with an unsigned and undated memorandum that applied to a $15 million obligation. For the 5 preaward responsibility determinations that were conducted, USAID documentation stated that the World Bank was a responsible grantee but did not document the analysis used to support the determinations. In April 2011, in response to GAO recommendations and our follow-up meetings, USAID revised and expanded its guidance on how to conduct preaward determinations for all PIOs. The revised guidance continues to require the USAID officer in charge of the agreement to document preaward responsibility determinations for PIOs. Under the new guidance, a group of USAID headquarters officials will first place the PIO, such as the World Bank, into one of three categories, based on USAID’s experience with the PIO and its determination of the PIO’s level of responsibility. The revised guidance requires USAID to consider several factors in determining a PIO’s level of responsibility, including the quality of its past performance, its most recent audited financial statements, and any other information needed to fully assess whether it has the necessary management competence to plan and carry out the intended activity. 
After a responsibility determination has been made, the USAID officer in charge of the agreement must still document the determination before making an award. USAID’s policy is to generally rely on a PIO’s financial management, procurement, and audit policies and procedures. The World Bank has established financial controls over donor contributions to ARTF. For example, the World Bank hired a monitoring agent responsible for monitoring the eligibility of salaries and other recurrent expenditures that the Afghan government submits for reimbursement against ARTF criteria. According to the World Bank, it conducts advance reviews of ARTF development procurement contracts. The amount of prior review of Afghan government procurement by the bank varies according to the method of selection or procurement, the type of good or service being procured, and the bank’s assessment of project risk, according to the bank. The World Bank also reports that it assesses projects semiannually as part of its regular supervision, in accordance with World Bank policies, procedures, and guidelines, and based in part on project visits. Also, the bank informed us that it manages and administers ARTF according to a set of World Bank global policies and procedures. ARTF is included in a single audit of all trust funds administered by the bank; this audit includes both an annual management assertion on the internal controls surrounding the preparation of trust fund financial reports and a combined financial statement for all modified cash basis trust funds. Also, the Afghan government’s external audit agency, the Control and Audit Office (CAO), conducts annual audits of ARTF-financed projects with the technical assistance of a firm of international accountants funded by the World Bank. 
As part of its supervision of ARTF-financed activities, a World Bank financial management team reviews the CAO audit reports, discusses its observations with government counterparts, and follows up to ensure resolution of any outstanding issues. Following the government’s annual submission of CAO audit reports to the World Bank, the bank sends a letter to the donors summarizing the timeliness and results of the CAO’s annual audits. The CAO’s audits of 16 ARTF development projects for the Afghan fiscal year that began in March 2009 had 16 unqualified (or “clean”) results. The World Bank shares CAO audit and monitoring agent reports with donors when requested. World Bank financial controls over ARTF face challenges posed by oversight entities’ limited movement in Afghanistan’s high-threat environment and the limited capacity of Afghan ministries to meet agreed-upon procurement and financial management standards, as shown in these examples:
- Security conditions prevented CAO auditors from visiting most of the provinces where ARTF funds were being spent. They were able to conduct audit tests in 10 of Afghanistan’s 34 provinces from March 2009 to March 2010 and, as a result, issued a qualified opinion of the financial statements of ARTF’s salary and other recurrent expenditures.
- According to the Department of the Treasury (Treasury), the ARTF monitoring agent recently resigned from its contract with the World Bank due to security concerns. USAID stated in July 2011 that the monitoring agent informed the bank in May 2011 that its contract should not be extended due to security concerns. The World Bank reports that it is seeking a new monitoring agent, has received many expressions of interest, and does not anticipate a gap in monitoring.
- Previously, security concerns prevented the ARTF monitoring agent from physically verifying ARTF salary and other recurrent expenditures outside of Kabul province from March 2009 through March 2010. The World Bank had required the monitoring agent or its subcontractor to visit sites in at least 12 provinces to verify expenditures made during the Afghan fiscal year that began in March 2010.
- The CAO lacks qualified auditors and faces other capacity constraints, according to the Special Inspector General for Afghanistan Reconstruction (SIGAR) and USAID. However, it uses international advisers and contracted auditors, funded by the World Bank, to help ensure that its audits of ARTF comply with international auditing standards. The World Bank recently reported that the overall timeliness of the CAO audits has been improving since 2006.
- The World Bank and donors have expressed concern over the level of ineligible expenditures submitted by the Afghan government for reimbursement. While ineligible expenditures are not reimbursed, the bank considers the level of ineligible expenditures to be an indicator of weaknesses in the Afghan government’s ability to meet agreed-upon procurement and financial management standards. The ARTF monitoring agent has questioned whether Afghan government civil servants have the experience and knowledge necessary to perform transactions in a manner eligible for reimbursement and whether ministries’ internal procedures fully reflect Afghan government laws and regulations.

Partly as a result of recommendations from a 2008 independent evaluation of ARTF by a Norwegian-based firm and discussions with donors, the World Bank is currently seeking to revise its 2002 grant agreements with donors to reflect its efforts to strengthen ARTF governance. According to the World Bank, the recommended changes include clarifying and strengthening donors’ oversight roles and responsibilities over ARTF. In response to our inquiries, the World Bank stated in April 2011 that it is considering incorporating its current standard provisions, applicable to multidonor trust funds, in the amended grant agreements with donors. 
These provisions would allow donor countries greater access to accounting and financial records and information. Under the current agreement with all donors, the World Bank provides donors with periodic reports, such as quarterly status reports, and an annual management assertion together with an attestation from the bank’s external auditors on the satisfactory performance of the bank’s procedures and controls. During our review on June 12, 2011, CSTC-A issued new procedures for direct assistance that require CSTC-A to conduct precontribution risk assessments before contributing funds to LOTFA. CSTC-A staff had previously informed us in February 2011 that CSTC-A had not assessed the risks of providing funds to LOTFA. Instead, CSTC-A had regularly assessed the capabilities and weaknesses of MOI. For example, CSTC-A assessed MOI’s finance and budget section in March 2011 and determined that while the section’s strengths included a responsiveness to pay issues, its weaknesses included a lack of well-trained staff and an unwillingness to change. CSTC-A generally relies on UNDP’s financial controls to ensure the accountability of funds it has contributed to LOTFA. CSTC-A contribution letters to LOTFA request that UNDP provide CSTC-A with quarterly reports, which UNDP posts on its Web site. CSTC-A officials informed us that CSTC-A reconciles its contributions to LOTFA annually. UNDP’s LOTFA project manager in Kabul informed us that UNDP makes copies of audits of LOTFA available upon request. CSTC-A officials told us they have not requested LOTFA audits. UNDP has established financial controls over the funds it provides to MOI for ANP expenses. It has stated that it reconciles its contributions with MOF records of MOI expenses on a quarterly and annual basis. UNDP recently reported that it deducted $17.6 million from its contribution to MOI as a result of ineligible expenses identified during its annual reconciliation for March 2009 through March 2010. 
UNDP has also hired a monitoring agent to review and monitor ANP remunerations and generate independent reports. UNDP staff told us that the LOTFA monitoring agent has offices in all regional police zones, which cover all of Afghanistan’s provinces. UNDP has reported that the monitoring agent operates in all ANP zones and conducts sample verifications of 30 percent of the total number of police.

Similar to the World Bank’s controls over ARTF, UNDP’s financial controls over LOTFA face challenges stemming from Afghanistan’s security environment. SIGAR reported in April 2011 that security issues had impaired efforts by LOTFA’s monitoring agent to (1) recruit staff in a high-threat province and (2) travel in 7 of Afghanistan’s 34 provinces for half of 2010. SIGAR also reported that security concerns had delayed LOTFA’s reconciliation of 2009 salaries. UNDP officials also told us that security concerns had restricted UNDP movements in Afghanistan.

UNDP’s financial controls also face challenges stemming from MOI’s institutional weaknesses. UNDP has reported that MOI’s “insufficient ownership and capacity development” remains one of LOTFA’s risks and that it has taken steps to mitigate this risk. Some problems that have been identified with MOI include the following:

In 2009, we reported that MOI did not have an accurate staffing roster, according to CSTC-A, and that the number of ANP personnel was unclear. We found that uncooperative ANP commanders were impeding State and MOI efforts to implement a new ANP identification card system to positively identify all police for pay purposes, according to State officials. According to State officials, these commanders were preventing State and MOI from determining the status of nearly 30,000 individuals whose names had been submitted to receive ANP identification cards. We recommended that DOD and State consider conditioning future U.S. contributions to LOTFA to reflect the extent to which U.S.
agencies had validated the status of MOI and ANP personnel to help ensure that the United States was not funding salaries of unverified personnel.

In 2011, SIGAR reported that MOI’s payroll system provides little assurance that MOI is paying only working ANP personnel or that LOTFA funds are reimbursing only eligible ANP costs.

MOI is also unable to pay all police through relatively secure systems. We have previously reported concerns regarding MOI pay systems. UNDP and CSTC-A have worked with MOI to develop electronic systems to reduce opportunities for skimming and corruption. One such system transfers funds directly into bank accounts established by individual Afghan police. Although progress has been made in establishing these systems, more than 20 percent of ANP staff are still paid using manual cash systems that are more vulnerable to abuse.

The recent tripling of U.S. direct assistance awards to Afghan government entities, coupled with the vulnerability of this assistance to waste, fraud, and abuse in the uncertain Afghan environment, makes it essential that U.S. agencies assess risks before awarding funds and implement controls to safeguard those funds. Direct assistance to the Afghan government involves considerable risk given the extent of corruption, the weak institutional capacity of the Afghan government to manage finances, the volatile and high-threat security environment, and the possibility that U.S. funds may be obligated months or years after they are awarded. Because conflict in many parts of Afghanistan poses significant challenges to efforts to ensure that funds are used as intended, the level of risk in Afghanistan warrants, to the extent feasible, sound internal controls and oversight over the billions of dollars that the U.S. government has invested in Afghanistan. Although risk assessment is a key component of internal controls, current USAID policy does not require preaward risk assessments of all Afghan government recipients of U.S.
direct assistance funds. To safeguard U.S. direct assistance funds, it is important that (1) the USAID Administrator follow through on his July 2010 commitment to Congress to assess risks associated with each Afghan government entity before awarding funds, (2) USAID consistently implement controls it establishes in bilateral direct assistance agreements, and (3) USAID consistently adhere to its risk assessment policies for multilateral trust funds in awarding funds to ARTF.

We recommend that the Administrator of USAID take the following three actions:
 Establish and implement policy requiring USAID to complete risk assessments before awarding bilateral direct assistance funds to Afghan government entities in all cases.
 Take additional steps to help ensure that USAID consistently implements controls established in its bilateral direct assistance agreements with Afghan government entities, such as requiring the retention of documentation of actions taken.
 Ensure USAID adherence to its policies for assessing risks associated with multilateral trust funds in awarding funds to ARTF.

We provided a draft of this report for comment to the Administrator of USAID and to the Secretaries of Defense, State, and the Treasury. Defense and State declined to provide comments. Treasury provided us with technical comments, which we incorporated in this report as appropriate. The World Bank and UNDP provided us with technical comments on the portions of the draft report that we provided them describing ARTF and LOTFA. We have incorporated these technical comments in this report as appropriate. USAID provided written comments on a draft of our report, which are reprinted in appendix II.
With regard to our recommendation that USAID establish and implement policies requiring USAID staff to complete risk assessments before awarding bilateral direct assistance funds to Afghan government entities in all cases, USAID stated that its existing policies and procedures in ADS already include requirements for risk assessment for each form of government-to-government assistance mechanism. USAID noted that for host country contracts, ADS requires an advance assessment of a host government’s procurement systems (ADS 305). USAID also stated that for cash transfer agreements, ADS requires an analysis of a host government’s ability to comply with the agreements. USAID further stated that its general activity planning guidance contains a recommendation that USAID offices should consider the capacity of potential partners to implement planned functions (ADS 201).

Although USAID policy in ADS includes some form of risk assessment for the funding mechanisms in use in Afghanistan, it does not require that a risk assessment be conducted in all cases. Specifically, ADS 305’s requirement for preaward assessment of host country contracts did not apply to six of the eight bilateral direct assistance cases we identified, because these six cases do not involve procurement. Further, according to the USAID comptroller, these six cases were not cash transfer agreements. As a result, these six cases fall under USAID’s general activity planning guidance (ADS 201), which recommends, but does not require, that USAID offices assess the capacity of potential partners in advance. As noted in this report, the lack of a specific requirement resulted in USAID making awards in two cases prior to completing a risk assessment. Therefore, we retained our recommendation that USAID establish and implement policies requiring preaward risk assessments in all cases in Afghanistan.
USAID also commented that it has taken additional steps to ensure that, going forward, risk assessments are completed in advance for each type of funding mechanism, in line with the Administrator’s July 2010 statement to Congress. Further, these steps are being undertaken “in light of” the Department of State’s July 14, 2011, certification to Congress that the U.S. and Afghan governments have established mechanisms within each implementing agency to ensure that certain fiscal year 2010 funds will be used as intended. On July 14, 2011, State did make this certification to Congress. However, the certification applies only to certain fiscal year 2010 funds, underscoring the need for USAID to establish a requirement for preaward assessments in Afghanistan in all cases in its policies and procedures.

With regard to our recommendation that USAID take additional steps to help ensure that it consistently implements controls established in its bilateral direct assistance agreements with Afghan government entities, USAID agreed to take such steps concerning its host country agreements with Afghan government entities. In doing so, USAID noted that USAID policy is to be as sparing in exercising its prior approval rights as sound management permits.

With regard to our recommendation that it adhere to its policies for assessing risks associated with multilateral trust funds in awarding funds to ARTF, USAID acknowledged that it had not always prepared or adequately documented its determinations for several ARTF grant amendments. USAID stated that it will follow its new procedures for such determinations, which it revised in April 2011. USAID also provided us with technical comments, which we have incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, State, and the Treasury; the Administrator of USAID; and other interested parties.
The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

This report assesses (1) the extent to which the U.S. Agency for International Development (USAID) and the Department of Defense (DOD) have increased direct assistance, (2) USAID’s and DOD’s steps to ensure accountability for bilateral direct assistance, and (3) USAID’s and DOD’s steps to ensure accountability for multilateral direct assistance.

To identify the extent to which USAID and DOD had increased their direct assistance, we first met with officials from the Department of State and USAID to define the scope of the term “direct assistance” for the purpose of this review. We then adopted USAID’s definition of direct assistance (or “on-budget” assistance) as U.S. funds provided through the Afghan government national budget for use by Afghan ministries or other government entities. This definition is consistent with guidance and procedures developed by the Office of the Under Secretary of Defense (Comptroller) and DOD’s Combined Security Transition Command-Afghanistan (CSTC-A). We focused on fiscal year 2009 and fiscal year 2010 to identify funding developments tied to the President’s 2009 announcement of a new U.S. strategy for Afghanistan and subsequent pledges concerning direct assistance to the Afghan government.

 To identify the extent to which USAID had increased its direct assistance, we obtained financial information from USAID’s mission in Kabul, Afghanistan. This information included USAID quarterly financial reports and USAID direct assistance agreements with Afghan government entities and the World Bank (including any modifications to the agreements).
We used this information to identify the value of the direct assistance USAID awarded in fiscal years 2009 and 2010. For the value of each award, we used what USAID refers to as the “total estimated contribution” that it has committed to provide, subject to the availability of funds, in signing a direct assistance agreement. For the date, we used each agreement’s signature date, in keeping with USAID’s use of the signature date as the effective date of the funded activity. We used the signature dates to allocate each award’s value to either fiscal year 2009 or fiscal year 2010. In using these data in the report, we noted that once it has awarded funds on a specific date, USAID may obligate and disburse those funds over multiple years, depending on the terms of the agreement. We assessed these data to be sufficiently reliable for our purposes.

 To identify the extent to which DOD had increased its direct assistance, we obtained financial information from DOD’s Office of the Under Secretary of Defense (Comptroller). This information included funds contributed to the Afghan Ministry of Defense (MOD) and the Afghan Ministry of Interior (MOI) by CSTC-A and the Defense Security Cooperation Agency. According to the Office of the Under Secretary of Defense (Comptroller), each DOD contribution to MOD, MOI, and the Law and Order Trust Fund for Afghanistan (LOTFA) was awarded, obligated, and disbursed in close succession. We allocated each contribution’s value to the fiscal year in which the contribution was made. We assessed these data to be sufficiently reliable for our purposes.

To assess steps taken by USAID and DOD to help ensure the accountability of their bilateral direct assistance to Afghan ministries and other government entities, we reviewed the policies and practices the agencies use to assess risks associated with direct assistance and to establish control mechanisms over the use of direct assistance funds.
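The fiscal-year allocation used for both the USAID and DOD figures above can be illustrated with a short sketch. The award dates and dollar values below are hypothetical, chosen only for illustration; the only factual input is that the U.S. federal fiscal year runs October 1 through September 30.

```python
from datetime import date

def fiscal_year(signature_date: date) -> int:
    """U.S. federal fiscal year containing a date (FY runs Oct 1 through Sep 30)."""
    return signature_date.year + 1 if signature_date.month >= 10 else signature_date.year

# Hypothetical awards: (agreement signature date, value in millions of dollars).
awards = [
    (date(2008, 12, 1), 50.0),
    (date(2009, 11, 15), 120.0),
    (date(2010, 6, 30), 300.0),
]

# Allocate each award's value to the fiscal year of its signature date.
totals: dict[int, float] = {}
for signed, value in awards:
    fy = fiscal_year(signed)
    totals[fy] = totals.get(fy, 0.0) + value

print(totals)  # {2009: 50.0, 2010: 420.0}
```

Note that, as the report observes, funds allocated to an award year in this way may still be obligated and disbursed in later years.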
 Our assessments were based on criteria drawn from GAO’s Standards for Internal Control in the Federal Government. Standards for Internal Control in the Federal Government, issued pursuant to the requirements of the Federal Managers’ Financial Integrity Act of 1982, provides the overall framework for establishing and implementing internal control in the federal government. Minimum internal control standards for providing reasonable assurance that agency assets will be safeguarded against fraud, waste, abuse, and mismanagement include risk assessment and control activities, which the standards define as key elements of an internal control framework. Risk assessment includes identifying internal and external risks an organization faces and their potential effect. Control activities are the policies and procedures (such as approvals, reconciliations, and reviews) agencies implement to mitigate identified risks and are essential for accountability of government resources.

 To evaluate relevant USAID policies and practices against these criteria, we reviewed information from both headquarters and the USAID mission in Afghanistan. We reviewed USAID agencywide policies for awarding bilateral direct assistance funds to host government entities, as outlined in (1) USAID’s Automated Directives System (ADS) and (2) interim guidance USAID provided to its mission on the use of direct assistance. We reviewed bilateral direct assistance program information from the USAID mission in Afghanistan, including preaward assessment procedures and reports, training material, direct assistance agreements, compliance documentation, approval memorandums, memorandums of understanding, and mission orders.
To identify USAID controls established over the use of direct assistance funds and determine whether USAID ensured compliance with its controls, we (1) reviewed all USAID bilateral direct assistance agreements, (2) identified the controls USAID established in each agreement, and (3) reviewed documentation USAID provided to us to demonstrate it had ensured compliance with its controls. We limited our analysis to controls triggered per the terms of each agreement before February 15, 2011. We also reviewed information from the USAID Office of Inspector General in Afghanistan regarding the mission’s preaward assessment process. We interviewed USAID officials in Washington, D.C., and in Kabul, Afghanistan.

 To assess DOD policies and practices, we reviewed information from the Office of the Under Secretary of Defense (Comptroller) and CSTC-A. This information included the Under Secretary’s February 4, 2011, Interim Guidance on Afghanistan Security Forces Fund (ASFF) Contributions to the Government of the Islamic Republic of Afghanistan (GIRoA), CSTC-A’s standard operating procedures for direct contributions, DOD contribution letters to MOD and MOI, and DOD assessments of the strengths and weaknesses of these ministries. We also interviewed DOD officials in Washington, D.C., and Kabul.

To assess steps taken by USAID and DOD to help ensure the accountability of their direct assistance to Afghan ministries through multilateral trust funds, we reviewed the policies and practices the agencies use to assess risks associated with direct assistance and to establish control mechanisms over the use of direct assistance funds.

 Our assessments were again based on criteria drawn from GAO’s Standards for Internal Control in the Federal Government, which defines risk assessment and control activities as key elements of an internal control framework.
 To evaluate relevant USAID policies and practices regarding multilateral trust funds against these criteria, we reviewed USAID agencywide policies for awarding direct assistance to multilateral trust funds such as the World Bank-administered Afghanistan Reconstruction Trust Fund (ARTF), as outlined in USAID’s Automated Directives System. We also reviewed ARTF-related program and budget documents from the USAID mission in Afghanistan, including USAID’s 2002 grant agreement with ARTF and modifications to the agreement. We also met with officials of the Department of the Treasury to coordinate our work regarding the World Bank. We reviewed World Bank documents concerning ARTF and interviewed USAID and World Bank officials in Washington, D.C., and in Kabul.

 To assess DOD policies and practices regarding multilateral trust funds against these criteria, we reviewed information from the Office of the Under Secretary of Defense (Comptroller) and CSTC-A. This information included the Under Secretary’s February 4, 2011, Interim Guidance on Afghanistan Security Forces Fund (ASFF) Contributions to the Government of the Islamic Republic of Afghanistan (GIRoA) and CSTC-A’s standard operating procedures for direct contributions. We also reviewed United Nations Development Program (UNDP) documents and reports concerning the Law and Order Trust Fund for Afghanistan and interviewed DOD officials in Washington, D.C., and in Kabul, as well as UNDP officials in Kabul.

Major contributors to this report were Tetsuo Miyabara, Assistant Director; Emily Gupta; Bruce Kutnick; Esther Toledo; and Pierre Toureille. Ashley Alley, Pedro Almoguera, Diana Blumenfeld, Jeffrey Baldwin-Bott, Gergana Danailova-Trainor, Martin De Alteriis, Karen Deans, Christopher Mulkins, Mona Sehgal, and Eddie Uyekawa also provided technical assistance.
The U.S. Agency for International Development (USAID) and the Department of Defense (DOD) award direct assistance to Afghanistan, using bilateral agreements and multilateral trust funds that provide funds through the Afghan national budget. GAO assessed (1) the extent to which the United States, through USAID and DOD, has increased direct assistance, (2) USAID and DOD steps to ensure accountability for bilateral direct assistance, and (3) USAID and DOD steps to ensure accountability for direct assistance via multilateral trust funds for Afghanistan. GAO reviewed USAID, DOD, and multilateral documents and met with U.S. officials and staffs of multilateral trust funds in Washington, D.C., and Afghanistan.

The United States more than tripled its awards of direct assistance to Afghanistan in fiscal year 2010 compared with fiscal year 2009. USAID awards of direct assistance grew from over $470 million in fiscal year 2009 to over $1.4 billion in fiscal year 2010. USAID awarded $1.3 billion to the World Bank-administered Afghanistan Reconstruction Trust Fund (ARTF) in fiscal year 2010, of which the bank had received $265 million as of July 2011. DOD direct assistance to two ministries grew from about $195 million in fiscal year 2009 to about $576 million in fiscal year 2010, including contributions to fund police salaries through the United Nations Development Program-administered (UNDP) Law and Order Trust Fund for Afghanistan (LOTFA).

USAID and DOD have taken steps to help ensure the accountability of their bilateral direct assistance to Afghan ministries, but USAID has not required risk assessments in all cases before awarding these funds. For example, USAID did not complete preaward risk assessments in two of the eight cases GAO identified.
Although current USAID policy does not require preaward risk assessments in all cases, these two awards were made after the USAID Administrator's July 2010 commitment to Congress that USAID would not proceed with direct assistance to an Afghan public institution before assessing its capabilities. In these two cases, USAID awarded $46 million to institutions whose financial management capacity was later assessed as "high risk." USAID has established various financial and other controls in its bilateral direct assistance agreements, such as requiring separate bank accounts and audits of the funds. USAID has generally complied with these controls, but GAO identified instances in which it did not. For example, in only 3 of 19 cases did USAID document that it had approved one ministry's prefinancing contract documents. DOD personnel in Afghanistan assess the risk of providing funds to two security ministries through quarterly reviews of each ministry's capacity. DOD officials also review records of ministry expenditures to assess whether ministries have used funds as intended. DOD established formal risk assessment procedures in June 2011, following GAO discussions with DOD about initial findings.

USAID and DOD generally rely on the World Bank and UNDP to ensure accountability over U.S. direct assistance provided multilaterally through ARTF and LOTFA, but USAID has not consistently complied with its risk assessment policies in awarding funds to ARTF. During GAO's review, DOD established procedures in June 2011 requiring that it assess risks before contributing funds to LOTFA. The World Bank and UNDP use ARTF and LOTFA monitoring agents to help ensure that ministries use contributions as intended. However, security conditions and weaknesses in Afghan ministries pose challenges to their oversight. For example, the ARTF monitoring agent recently resigned due to security concerns.
The World Bank is now seeking a new monitoring agent and does not anticipate a gap in monitoring. In addition, weaknesses in the Ministry of Interior's systems for paying wages to police challenge UNDP efforts to ensure that the ministry is using LOTFA funds as intended.

GAO recommends that USAID (1) establish and implement policy requiring risk assessments in all cases before awarding bilateral direct assistance funds, (2) take additional steps to help ensure it implements controls for bilateral direct assistance, and (3) ensure adherence to its risk assessment policies for ARTF. In commenting on the first recommendation, USAID stated that its existing policies call for some form of risk assessment for all awards and that it has taken new steps to ensure risk assessment. GAO retained its recommendation because existing USAID policies do not require preaward risk assessments in all cases. USAID concurred with GAO's other recommendations.
When a servicemember is charged with misconduct—such as drug use, insubordination, absence from the military without leave, or criminal behavior—the military can take action to separate the servicemember through either a punitive discharge or an administrative separation. A punitive discharge generally involves a trial by court-martial, where charges are filed and the case is adjudicated in a military court. In contrast, administrative separations involving misconduct charges are handled through a nonjudicial administrative process and can include attempts to correct and rehabilitate behavior and to counsel servicemembers on the impact of being separated for misconduct.

There are two main types of separations for misconduct for enlisted servicemembers: administrative separations for misconduct and administrative separations in lieu of trial by court-martial. Administrative separation for misconduct is an involuntary separation of a servicemember who is unqualified for further military service. Behaviors that can lead to an administrative separation for misconduct range from a pattern of minor disciplinary infractions to the commission of a serious military or civilian offense. Administrative separation in lieu of trial by court-martial is when a servicemember facing trial by court-martial voluntarily requests to be discharged from military service, and, if approved, the separation case is then handled through the administrative process. The process for separating servicemembers varies slightly between the two separation types. (See fig. 1.)

When a servicemember separates from the military, DOD characterizes the nature of that servicemember’s military service. Administrative separations generally result in one of three potential characterizations of service, which determine a servicemember’s eligibility for VA benefits and services.
Specifically, servicemembers who receive an “honorable” characterization of service are eligible for all VA benefits and services; servicemembers who receive a “general” characterization of service are eligible for most VA benefits and services, with the exception of some VA education assistance; and servicemembers who receive an “other than honorable” characterization of service may not be eligible for any VA benefits and services, including health care. According to DOD policy, servicemembers separating under either type of administrative misconduct separation will normally receive a characterization of service of “other than honorable,” though other characterizations may be warranted depending on the circumstances.

Congress has enacted several laws and DOD has established several additional policies governing the military services’ handling of misconduct separations in cases involving PTSD and TBI.

Screening servicemembers prior to separation. The National Defense Authorization Act for Fiscal Year 2010, as amended, requires each military service to provide, under policies established by DOD, a medical examination for certain servicemembers diagnosed with PTSD or TBI who are facing administrative separation under conditions other than honorable, including administrative separations in lieu of court-martial. The purpose of the medical examination—or screening—is to assess whether these conditions were a mitigating factor in the behavior that resulted in the misconduct charge. The law prohibits the administrative separation of these servicemembers under conditions other than honorable until the results of the screening have been reviewed by appropriate authorities responsible for the separation case, as determined by the relevant military service. The law’s screening requirements only apply to certain servicemembers facing administrative separation under conditions other than honorable.
As relevant here, the law requires a screening for those servicemembers who have been deployed overseas in support of a contingency operation during the previous 24 months and have been diagnosed with PTSD or TBI by one of five provider types: (1) a physician, (2) clinical psychologist, (3) psychiatrist, (4) licensed clinical social worker, or (5) psychiatric advanced practice registered nurse. The list of providers eligible to diagnose PTSD and TBI is not the same as the list of providers eligible to conduct PTSD screenings or the list of providers eligible to conduct TBI screenings. Specifically, the statute restricts PTSD screenings to one of four provider types—(1) a clinical psychologist, (2) psychiatrist, (3) licensed clinical social worker, or (4) psychiatric advanced practice registered nurse. In comparison, TBI screenings may be performed by (1) a physician, (2) clinical psychologist, (3) psychiatrist, or (4) other health care professional, as appropriate. As a result of this statutory scheme, physicians may diagnose PTSD and TBI, and they may also perform TBI screenings. However, they may not perform PTSD screenings.

DOD updated its separation policy first in 2010 and most recently in 2014 in response to changes in the law. Among other things, DOD required each of the military services to issue its own guidance to implement the new requirements within that military service.

Training on PTSD and TBI. Although not required by law, DOD has established a policy for the management of mild TBI, also known as concussion, in the deployed setting. The policy directs the military services to develop and support effective training plans for servicemembers on early detection of potential mild TBI in the deployed setting.
While DOD does not have—nor is it legally required to have—any identifiable policies that require training on PTSD-specific symptoms, it does have a policy that requires training on combat and operational stress reactions, which may include sleep disturbance, anger, and difficulty concentrating. While combat and operational stress reactions may overlap with PTSD symptoms, the conditions are distinct.

Counseling on potential ineligibility for VA benefits and services. DOD has several policies related to counseling servicemembers on aspects of DOD’s separation policy. Some counseling policies apply to all servicemembers or to those separating under honorable or general conditions. One is specific to separations in lieu of trial by court-martial. DOD policy requires the military services to establish procedures for periodically counseling all servicemembers throughout their careers on DOD’s separation policy, including the types of separation and how particular actions might affect a servicemember’s eligibility for VA benefits and services. DOD policy requires that the military services offer legal counsel to servicemembers requesting separation in lieu of trial by court-martial prior to the servicemember submitting his or her request to be separated. The servicemember may refuse to meet with counsel, but must state in the written request for separation that he or she understands that such request may result in an “other than honorable” characterization of service and the consequences, such as potential loss of benefits, associated with that characterization.

Our analysis of DOD data shows that 91,764 servicemembers were separated for misconduct from fiscal years 2011 through 2015; of these servicemembers, 57,141—62 percent—had been diagnosed within the 2 years prior to their separation with PTSD, TBI, or certain other conditions that could be associated with misconduct.
More specifically, 16 percent, or 14,816 of the 91,764 servicemembers who were separated for misconduct, had been diagnosed with PTSD or TBI. Looking at the conditions individually, 8 percent had been diagnosed with PTSD and 11 percent had been diagnosed with TBI, while other conditions, such as adjustment and alcohol-related disorders, were more common. (See table 1.) For additional data on prior diagnoses of servicemembers separated for misconduct, see appendix II.

The 57,141 servicemembers who were separated for misconduct and diagnosed within the 2 years prior to separation with PTSD, TBI, or certain other conditions had, on average, 4 years of active military service. Almost all, or 98 percent, were enlisted servicemembers, rather than officers, and two-thirds had not been deployed overseas within the 2 years prior to separation. For additional data on total separations and separations for misconduct, see appendix III. Further, of these servicemembers, 23 percent, or 13,283, received an “other than honorable” characterization of service, making them potentially ineligible for VA benefits and services, including health care. The majority—that is, 71 percent—of the servicemembers who were separated for misconduct and diagnosed within the 2 years prior to separation with PTSD, TBI, or certain other conditions received a “general” characterization of service. (See fig. 2.) Within the smaller population of servicemembers separated for misconduct and diagnosed with PTSD or TBI, these proportions were the same—that is, 23 percent received an “other than honorable” characterization of service and 71 percent received a “general” characterization of service.
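The rounded percentages cited above follow directly from the counts reported in the analysis; a minimal sketch, using only figures stated in the text:

```python
# Counts reported in the analysis of DOD separation data (fiscal years 2011-2015).
total_separated = 91_764       # servicemembers separated for misconduct
diagnosed_any = 57_141         # diagnosed with PTSD, TBI, or certain other conditions
diagnosed_ptsd_tbi = 14_816    # diagnosed with PTSD or TBI specifically
other_than_honorable = 13_283  # of the diagnosed, received "other than honorable"

def pct(part: int, whole: int) -> int:
    """Share of whole, rounded to the nearest whole percent."""
    return round(100 * part / whole)

print(pct(diagnosed_any, total_separated))       # 62
print(pct(diagnosed_ptsd_tbi, total_separated))  # 16
print(pct(other_than_honorable, diagnosed_any))  # 23
```

These reproduce the 62, 16, and 23 percent figures in the text; the 8 and 11 percent figures for PTSD and TBI individually cannot be recomputed here because the underlying counts are reported only in table 1.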
Our analysis of VA data shows that as of June 2016, of the 13,283 servicemembers who were separated for misconduct with an “other than honorable” characterization and diagnosed with one of the conditions we reviewed, 87 percent had not submitted a claim to VA for benefits and services or had not completed VA’s determination process. Twelve percent submitted a claim and were determined eligible for at least some VA benefits and services, including health care, while the remaining 1 percent submitted a claim and were determined ineligible for all VA benefits and services. For additional data on the characterizations of service for servicemembers separated for misconduct and previously diagnosed with PTSD, TBI, or certain other conditions, see appendix IV. We found that two of the four military services’ policies are inconsistent with DOD policies related to screening servicemembers for PTSD and TBI prior to separation and training servicemembers on the early detection of mild TBI symptoms in the deployed setting. In contrast, we found that the counseling policies for all four military services are consistent with DOD policy. (See appendix V for more information on the individual policies that the Army, Marine Corps, Air Force, and Navy identified as implementing applicable DOD policy requirements.) Screening servicemembers prior to separation. We found that the Marine Corps’ screening policies are consistent with DOD’s screening policies, the Army’s policy is consistent but set to expire on March 31, 2018, and the Air Force’s and Navy’s screening policies are not consistent with DOD policy. (See fig. 3.) For the purposes of this review, we compared the military services’ screening policies to DOD’s policy governing screenings for certain servicemembers deployed overseas in support of a contingency operation during the previous 24 months. Our assessment was based on the following requirements set forth in DOD’s screening policy, which align with applicable statutory requirements: 1. 
do the military services’ screening policies apply to servicemembers diagnosed with PTSD or TBI by a physician, clinical psychologist, psychiatrist, licensed clinical social worker, or psychiatric advanced practice registered nurse; 2. do the military services’ screening policies apply to servicemembers facing administrative separation under conditions other than honorable, including those separating in lieu of trial by court-martial; and 3. do the military services’ screening policies identify an appropriate official to review the results of the screening before deciding whether the servicemember’s service was “other than honorable”? We did not assess the military services’ consistency with DOD’s screening policy regarding the types of providers eligible to perform screenings because, at the time of our review, DOD’s policy was inconsistent with the law in one respect. In particular, DOD’s policy permitted physicians to conduct PTSD screenings, whereas the law only authorizes clinical psychologists, psychiatrists, licensed clinical social workers, and psychiatric advanced practice registered nurses to do so. DOD officials stated that they would take steps to correct this issue, which they did on February 27, 2017, by publishing a revised policy that removed physicians from the list of providers who can conduct PTSD screenings. Specifically, we found the following: The Marine Corps’ screening policy not only meets, but exceeds, key DOD policy requirements. In particular, the Marine Corps’ screening policy requires screenings for servicemembers who have been previously diagnosed with PTSD or TBI or who allege their effects, regardless of the characterization of their service or whether they served overseas in support of a contingency operation. In April 2017, the Army reissued its temporary policy requiring separation authorities to ensure screenings for servicemembers diagnosed with PTSD or TBI, consistent with DOD policy. 
The policy applies to servicemembers who were deployed overseas in support of a contingency operation during the previous 24 months and who are facing administrative separation under conditions other than honorable, including those separating in lieu of trial by court-martial. While this policy is consistent with DOD’s screening policy, it is set to expire on March 31, 2018, and the Army’s permanent separation regulation has not yet been updated to reflect DOD’s screening requirements under the statute, as amended. The Air Force’s screening policy, which is set to expire no later than June 2017, is inconsistent with DOD’s policy in two respects. First, the screening requirement under DOD’s policy extends to servicemembers who request separation in lieu of trial by court-martial, whereas the Air Force’s policy excludes this group. An Air Force official told us that PTSD and TBI screenings are, in fact, given to this group because separating servicemembers are required to be asked about PTSD and TBI symptoms as part of their physical prior to separation. However, a separation physical does not meet the requirements of a PTSD or TBI screening for the purpose of determining whether the condition is a possible mitigating factor in the separation characterization of servicemembers who request separation in lieu of trial by court-martial, as required by DOD policy. Second, DOD policy requires that any qualifying servicemember who is diagnosed with PTSD or TBI by one of five specified provider types, including a licensed clinical social worker or psychiatric advanced practice registered nurse, receive a screening to determine whether the condition is a potentially mitigating factor in the conduct in question. However, Air Force policy does not require such a screening unless the diagnosis is made by a doctoral-level provider, thereby not recognizing diagnoses made by licensed clinical social workers and psychiatric advanced practice registered nurses. 
An Air Force official told us that the intent of the policy is to ensure that diagnoses made by these providers are reviewed and confirmed by doctoral-level providers. However, imposing such a condition on the diagnoses made by these provider types is inconsistent with DOD policy, which recognizes that their diagnoses are sufficient to trigger the screening requirement. Similar to the Air Force, the Navy’s screening policy excludes servicemembers who request separation in lieu of trial by court-martial and is likewise inconsistent with DOD’s screening policy. This is because the Navy’s separation policy only requires screenings prior to involuntary separations, whereas the Navy elsewhere defines a separation in lieu of trial by court-martial as a voluntary separation. While the Navy, like the Air Force, requires all servicemembers to undergo a separation physical that includes questions about PTSD and TBI, such a procedure does not meet the requirement of a screening for the purpose of assessing whether the condition is a mitigating factor in the misconduct charged in the separation case. Our review did not evaluate whether the inconsistencies we identified had an impact on any individual servicemember who may have been entitled to a screening. However, unless the military services rectify these inconsistencies, DOD is at risk that some servicemembers may be deprived of a required screening for PTSD or TBI or may not have the results of a screening taken into account as a possible mitigating factor in their misconduct, as required by DOD policy. Some of the inconsistencies in military service screening policies appear to have existed since DOD updated its screening policy in 2014 in response to various statutory amendments enacted in 2013. 
For example, neither the Navy nor the Air Force has updated its policies to expressly require PTSD or TBI screenings for servicemembers requesting separation in lieu of trial by court-martial for the purpose of determining whether the condition is a mitigating factor in the servicemember’s separation characterization. While each military service has released at least one policy update to correspond with other updates made by DOD in 2014, none of the Navy or Air Force updates pertain to servicemembers requesting separation in lieu of trial by court-martial. Training on PTSD and TBI. We found that two of the four military services’ TBI training policies are inconsistent with DOD policy. (See table 2.) DOD policy requires the military services to develop effective training plans for all servicemembers on the early detection of potential concussion or mild TBI in the deployed setting. As previously noted, DOD does not have a specific policy that requires training on how to identify the symptoms of PTSD. We found that the Army’s temporary policy and the Marine Corps’ policy are consistent with DOD’s policy requiring that all servicemembers be trained on how to identify mild TBI symptoms in the deployed setting, but the Air Force’s and the Navy’s policies are not. Specifically, we found the following: The Air Force training policy incorporates DOD’s policy by reference and assigns responsibility to an Air Force component to develop training plans. However, as of April 2017, Air Force officials had not identified whether this component had issued such training plans. While the Navy’s TBI policy requires that training be provided to certain health providers assigned to military treatment facilities, the policy does not extend to all servicemembers in the deployed setting, as DOD’s policy requires. 
Because of these inconsistencies, the Air Force and the Navy face an increased risk that some servicemembers, including officers, are not being trained on how to identify symptoms of mild TBI in the deployed setting. Counseling on potential ineligibility for VA benefits and services. All four military services have policies that require all servicemembers to be counseled about their eligibility for VA benefits and services at multiple points in their career, with additional counseling requirements for servicemembers who are requesting separation in lieu of trial by court-martial. This is consistent with DOD policy. All four military services established periodic counseling policies, including a requirement for servicemembers to be briefed on eligibility for VA benefits and services at least twice in the first year of service. All four military services also established policies requiring that servicemembers who are considering separating in lieu of trial by court-martial be offered legal counsel who, among other things, advises the servicemember on his or her potential ineligibility for VA benefits and services as a result of an “other than honorable” characterization of service. In our review of separation documents and interviews with installation-level Army and Marine Corps officials, we found that the Army and Marine Corps may not have adhered to their own screening, training, and counseling policies. Screening servicemembers prior to separation. Our review of the Army’s and Marine Corps’ implementation of screening policies identified instances in which servicemembers may not always have been screened for PTSD and TBI and in which screening results may not have been reviewed by the appropriate officials as required by the military services’ policies. 
In our review of separation packets for Army servicemembers, we found the following: In 2010, the Army issued a policy requiring PTSD and TBI screening for servicemembers administratively separated for misconduct who meet certain requirements. In our review of 46 separation packets for Army servicemembers administratively separated for misconduct from fiscal years 2011 to 2015, we did not find screening documentation in 16 packets, and in 1 of the packets the documentation was unclear. In October 2013, the Army updated its policy to require screenings for servicemembers separating in lieu of trial by court-martial. In our review of 7 separation packets for Army servicemembers separated in lieu of trial by court-martial after the policy update to the end of fiscal year 2015, we did not find screening documentation in 5 of the packets. In addition, the Army’s October 2013 update required the separation authority to review the screenings prior to making a final separation decision. In our review of 21 separation packets for Army servicemembers who were administratively separated for misconduct or in lieu of trial by court-martial after the policy update to the end of fiscal year 2015, we did not find documentation of review by the separation authority in 4 of the packets, and in 5 of the packets the documentation was unclear. As a result, PTSD or TBI may not have been identified as a possible mitigating factor in the separation. In our review of separation packets for Marine Corps servicemembers, we found the following: Of those Marine Corps servicemembers who were administratively separated for misconduct from fiscal years 2011 to 2015, screening documentation was missing for 18 of the 48 separation packets we reviewed, and in 2 packets the documentation was unclear. As such, it is unclear whether these Marine Corps servicemembers were screened for PTSD and TBI as required under DOD policy. 
In addition, 19 of the 48 separation packets we reviewed did not have documented evidence that the appropriate official reviewed the screening prior to separation, and in 1 packet the documentation was unclear. As such, we cannot be certain that these Marine Corps servicemembers’ screening results were reviewed by the appropriate official prior to the servicemembers being separated. While the absence of documentation does not prove that screenings and review by appropriate officials did not occur, it does suggest that Army and Marine Corps policies may not have been followed. In interviews, Army and Marine Corps installation-level defense counsel who represent servicemembers raised additional concerns regarding the PTSD and TBI screenings. For example, Marine Corps defense counsel raised concerns that screenings might be occurring after key separation recommendations have been made and that the tools used to determine whether servicemembers have either condition should include a more comprehensive exam for PTSD and TBI. During our review of separation packets, we found one instance in which a screening did not occur before a board was convened; however, the board members did discuss PTSD during the proceedings. In addition, Army and Marine Corps defense counsel raised concerns that providers might be pressured by commanders to clear servicemembers. Training on PTSD and TBI. During our Army and Marine Corps installation interviews, we found that some Army officers who should have received training on how to identify symptoms of TBI may not have; further, installation-level officials for both the Army and the Marine Corps could not produce documentation showing which officers had received the training. During our site visits to the Army installations, noncommissioned officers, in particular, told us that they did not receive specific training on how to identify symptoms of TBI. 
In contrast, commissioned officers were more likely to recall that they had received the training, often citing the officer leadership courses as the way they received the information. When we asked officials at both Army and Marine Corps installations for documentation of training, officials said that while they have sign-in sheets for the servicemembers who participate in the training, they do not maintain them or other documentation, such as training logs, at the installations because they are not required to do so. Officers at one of the Army and the Marine Corps installations we visited said that other efforts—beyond training courses—to build awareness of PTSD and TBI have been helpful. In particular, they said that a DOD-wide initiative to have a mental health provider “embedded” in the units has been beneficial in helping them to identify these symptoms in the servicemembers under their command. The officers added that discussing a servicemember’s behavior with a mental health provider often resulted in a medical referral instead of initiating the process for separating a servicemember for misconduct. Counseling on potential ineligibility for VA benefits and services. We found instances in which both the Army and the Marine Corps may not have adhered to their own counseling policies, specifically prior to servicemembers’ requests for separation in lieu of trial by court-martial. In 11 of the 48 packets included in our analysis of Army servicemembers who requested separation in lieu of trial by court-martial, we found that there was no documented evidence, or the evidence was unclear, as to whether the servicemembers were counseled on their potential ineligibility for VA benefits and services. 
Similarly, with respect to the Marine Corps, 4 of the 15 packets we reviewed for servicemembers who requested separation in lieu of trial by court-martial were missing documented evidence that the servicemembers were made aware of their potential ineligibility for VA benefits and services. As such, we cannot be certain that these servicemembers were counseled on potential ineligibility for VA benefits and services as required. Army and Marine Corps installation-level defense counsel said that if a servicemember elects to meet with legal counsel prior to requesting separation in lieu of trial by court-martial, counsel reviews how receiving an “other than honorable” characterization can affect the servicemember’s eligibility for VA benefits and services as part of the counseling. Officials from both military services also stated that servicemembers are counseled on how their characterization of service could affect their eligibility for VA benefits and services as part of their initial training. Moreover, installation-level officials said that commanders at multiple levels discuss the impact of certain characterizations on eligibility for VA benefits and services as part of their misconduct counseling with servicemembers. Marine Corps officials said that early counseling is done through one-on-one conversations and that, in many cases, counseling sessions are documented only once the formal separation process is about to begin. However, as previously reported, based on the results of our packet review we cannot be certain that counseling policies for separations in lieu of trial by court-martial were always implemented as required. DOD does not routinely monitor the military services’ adherence to policies for screening servicemembers for PTSD and TBI prior to separating them for misconduct, training officers on how to identify symptoms of TBI in the deployed setting, and counseling servicemembers on eligibility for VA benefits and services. 
According to DOD officials, the expectation is that the military services are responsible for monitoring adherence to these policies. While both the Army and the Marine Corps have some data available that could make it possible for them to monitor whether their screening, training, and counseling policies are being adhered to as required, the two military services are not routinely using these data to do so, in some instances because of limited resources, according to an official. Federal internal control standards call for agencies to establish activities to monitor internal control systems and evaluate results. Without monitoring adherence to these policies, the military services cannot provide assurance that certain servicemembers are screened for PTSD and TBI prior to separation; all servicemembers, including officers at all levels, are trained on how to identify symptoms of mild TBI in the deployed setting; and servicemembers are counseled about VA benefits and services during the separation process. Screening servicemembers prior to separation. Neither the Army nor the Marine Corps routinely monitors whether its screening policies related to PTSD and TBI are adhered to. According to an Army headquarters official, it is the responsibility of commanders and staff judge advocates at each Army installation to review a servicemember’s separation material and make sure that the required screening documents are included before final separation decisions are made. Such documents can indicate whether or not servicemembers have been screened for PTSD or TBI. However, the official added that the Army does not have a systematic method for monitoring whether certain servicemembers are being screened. As previously discussed, we found that screening documents were not always included in the Army separation packets that we reviewed. 
Recent Army audits have found that the Army did not have sufficient documentation to demonstrate that it was always considering whether PTSD or TBI was a mitigating factor in servicemembers’ behavior prior to separating them for misconduct. Specifically, the Army Audit Agency conducted two audits that reviewed separation and medical records of servicemembers who had been separated with a characterization of service of “other than honorable” and had been deployed and diagnosed with PTSD or TBI within 24 months of their separation for the period of November 1, 2009 through July 31, 2015. The Army Audit Agency found that not all of the separation packets it reviewed had documentation showing that screening for PTSD and TBI had occurred prior to the servicemember’s separation for misconduct. While the Army does not have a systematic method for monitoring, it does have data available that could be used to routinely monitor whether screening is occurring. Specifically, the Army has access to servicemembers’ medical records and could review the medical records of certain servicemembers being separated for misconduct to determine if they had been previously diagnosed with PTSD or TBI and therefore should be screened. According to Army officials, Army Behavioral Health System of Care personnel are building an electronic program that will create an automated record of all screenings. Officials added that this electronic program is expected to be completed in fiscal year 2017. In the case of the Marine Corps, officials recognize that monitoring is necessary; however, an official told us that the Marine Corps does not have sufficient data to routinely monitor whether screenings are occurring. The officials told us the Marine Corps was exploring options for analyzing separation and medical data to see whether separating servicemembers had been screened for PTSD and TBI. 
The officials explained that the Marine Corps has a new electronic system that allows commanders at the installation level to input real-time information about an administrative separation, which can be viewed by officials at Marine Corps headquarters. The officials stated that this system could be used to help identify separation issues, such as unaccounted-for screenings prior to a servicemember being separated. However, one of the officials told us that the Marine Corps installations are using the electronic system in only 40 percent of separation cases. Furthermore, the official stated that medical personnel would be required to review medical records of certain servicemembers to determine if they had been diagnosed with PTSD or TBI, but the Marine Corps does not have the resources to hire such personnel. Training on PTSD and TBI. Although Army and Marine Corps officials told us they collect data about the training provided as part of the TBI programs, they do not use these data to routinely monitor adherence with their training policies. As previously discussed, Army and Marine Corps officials told us they collect attendance data for the servicemembers, including officers, who participate in the training. However, according to officials, these data are not used to identify officers who may not have received training or to routinely monitor whether the Army and Marine Corps are adhering to DOD’s policy that all servicemembers receive training on how to identify mild TBI symptoms in the deployed setting. Because DOD does not have a policy requiring PTSD training, and the military services have not developed such policies on their own, there is no policy against which DOD and the military services can monitor adherence. Counseling servicemembers about VA benefits and services. 
While the Army and the Marine Corps collect some data about the counseling provided to servicemembers, the two military services do not routinely use these data to ensure adherence with their counseling policies. For example, Marine Corps headquarters officials told us that they collect attendance records for the Marine Corps’ 90-day personnel readiness training, which includes counseling on VA benefits and services, but the data collected are not reviewed to ensure that servicemembers have received the required counseling. For the counseling that is provided to servicemembers being separated in lieu of trial by court-martial, Army and Marine Corps officials told us that documentation of this counseling should be included in each servicemember’s separation packet. However, officials from both military services told us that servicemembers’ separation packets are not reviewed by officials at the military services’ headquarters to ensure that counseling is occurring. Similar to how the PTSD and TBI screening documents are reviewed, officials explained that military staff at the installations should review the separation material to make sure that the counseling documents are included. For example, an Army headquarters official told us that the Army relies on staff judge advocates at the installations to review servicemembers’ separation material and make sure that the required counseling documents are included prior to a final separation decision. However, as previously noted, in our review of a sample of Army and Marine Corps separation packets, we found that counseling documents were not always present and therefore could not confirm that the counseling had occurred. DOD’s policies and the policies of the four military services—Army, Marine Corps, Air Force, and Navy—are intended to ensure that PTSD and TBI are appropriately considered before a servicemember is separated for misconduct. 
However, we found that the Air Force’s and the Navy’s pre-separation screening and training policies are inconsistent with DOD policy. Furthermore, we found that the Army and the Marine Corps may not always be adhering to their own policies and that monitoring of the policies—which could include a review of documentation, data analyses, or other oversight mechanisms—by DOD, the Army, and the Marine Corps is limited. While we did not review whether the Air Force or Navy adhered to or monitored their policies, we identified inconsistencies between their policies and DOD’s policies. As a result of policy inconsistencies and limited monitoring, DOD has little assurance that certain servicemembers diagnosed with PTSD or TBI receive the required screening and counseling prior to being separated for misconduct and that all servicemembers, including officers, have been trained on how to identify symptoms of mild TBI in the deployed setting. Unless the policy inconsistencies are resolved and routine monitoring is undertaken to ensure adherence, the risk increases that servicemembers may be inappropriately separated for misconduct without adequate consideration of these conditions’ effects on behavior, separation characterization, or eligibility for VA benefits and services. To increase its assurance that PTSD and TBI are appropriately considered prior to separating certain servicemembers from the military for misconduct, the Secretary of Defense should take the following five actions: Direct the Air Force and Navy to address inconsistencies with DOD policy in their policies related to screening certain servicemembers, including servicemembers separating in lieu of trial by court-martial, for PTSD and TBI and reviewing the results prior to separation for misconduct; and training servicemembers, including officers, on how to identify mild TBI symptoms in the deployed setting. 
Ensure that the military services routinely monitor adherence to policies related to screening certain servicemembers for PTSD and TBI prior to separation for misconduct; training servicemembers, including officers, on how to identify mild TBI symptoms in the deployed setting; and counseling about VA benefits and services during the process of separating certain servicemembers for misconduct. DOD and VA reviewed a draft of this report. DOD provided general comments, which are reprinted in appendix VI, and technical comments, which we incorporated as appropriate. VA did not provide written comments on this report, but the department indicated that it will continue to raise awareness of PTSD and TBI programs available to veterans, including veterans with a less than honorable discharge. In its comments, DOD concurred with four of our five recommendations. Specifically, DOD agreed with our recommendation to direct the Air Force and the Navy to address inconsistencies in their policies related to screening certain servicemembers for PTSD and TBI and reviewing the results prior to separation for misconduct. DOD also agreed with our recommendations to ensure that the military services routinely monitor adherence to policies related to screening certain servicemembers for PTSD and TBI prior to separation for misconduct; counseling servicemembers about VA benefits and services during the separation process; and training servicemembers, including officers, on how to identify mild TBI symptoms. On this last monitoring recommendation related to training, based on additional information from DOD we revised the language of the original recommendation to clarify that the military services should monitor the training provided to servicemembers on how to identify mild TBI symptoms in the deployed setting. 
DOD did not concur with our recommendation that it direct the Air Force and the Navy to address inconsistencies between their policies and DOD’s policy related to training servicemembers on how to identify mild TBI symptoms. In its comments, DOD said that because our recommendation did not specify that training to identify mild TBI symptoms was in the deployed setting, we were in effect creating new policy. We have revised the language in our recommendation to clarify this point. However, this clarification does not obviate the need for our recommendation because the inconsistencies we identified in the Air Force and Navy policies were material to their mild TBI training requirements in deployed settings. In particular, the Air Force provided no documentation of the training plans it was responsible for developing, while the Navy’s mild TBI training requirement only applied to certain health providers. DOD also raised four concerns about our data analysis. First, DOD raised concern that data from DOD’s Defense Manpower Data Center (DMDC) on the total number of separations were lower than Army and Marine Corps data. We obtained our data from DMDC and, over the course of our audit, assessed the reliability of the data provided in several ways, including comparing them with published sources and discussing the data with officials from the military services and DMDC. Differences between DMDC’s and the military services’ data were reviewed and, where necessary, explanations were identified and noted. The discrepancy in the total number of separations noted in DOD’s comments stems largely from differences in which servicemembers were included. In their analyses of total separations, the Army and the Marine Corps included servicemembers who separated from active duty and transferred directly to National Guard or Reserve duty, whereas the data provided by DMDC that we used for our analysis excluded these servicemembers. 
We excluded these servicemembers because our review focuses on servicemembers separating to civilian life. In response to DOD’s concern, we clarified this exclusion in the report. Further, these military services confirmed that DMDC’s misconduct separations data—the data on which our findings are based—are consistent with their data. Second, DOD raised concern that the number of servicemembers we report as having been diagnosed with PTSD or TBI—14,816—is inaccurate because of double counting. We disagree. As stated in the report, collectively, about 16 percent, or 14,816, of the 91,764 servicemembers who were separated for misconduct had been diagnosed with PTSD or TBI. As we explain in footnote 28, this proportion is lower than the sum of the proportion diagnosed with PTSD (8 percent) and the proportion diagnosed with TBI (11 percent) because some servicemembers had been diagnosed with both conditions. However, in response to DOD’s concern, we made additional clarifications in the report. Third, DOD expressed concern about some of the conditions included in our group of conditions other than PTSD and TBI that could be associated with misconduct. As described in our methodology, we developed our list of conditions through conversations with DOD and mental health professionals and reviews of relevant publications. We ultimately selected conditions that could potentially be caused or exacerbated by military service, or that could be misdiagnosed as PTSD or TBI. For example, one of the reasons we included depressive disorders was the discussion of major depressive disorder and depressive symptoms in a 2008 RAND study. This study noted that depression could be linked to specific war experiences such as loss, and affects mood, thoughts, and behavior, but often goes unrecognized and unacknowledged. 
Furthermore, because we understand that there can be conflicting evidence on these issues, in key parts of the report we separate discussions of PTSD and TBI data from the data on other health conditions. In response to DOD’s concern, we also included additional data on PTSD and TBI separately from the data related to other health conditions. Finally, DOD expressed concern that our use of the word “officials” to describe servicemembers’ defense counsel implied that they represented the interests of the military services rather than the servicemembers. In response, we removed the word “officials” in describing defense counsel. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Veterans Affairs, the Secretaries of the Air Force, Army, and Navy, the Commandant of the Marine Corps, the Assistant Secretary of Defense for Health Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. To determine the number of servicemembers—officers and enlisted— separated for misconduct in fiscal years 2011–2015 and diagnosed with post-traumatic stress disorder (PTSD), traumatic brain injury (TBI), or certain other conditions, we obtained data from the Department of Defense’s (DOD) Defense Manpower Data Center (DMDC) and the Defense Health Agency (DHA). 
DMDC provided us with data on total separations of active duty servicemembers from the Army, Air Force, Marine Corps, and Navy, as well as a list of active duty servicemembers who were administratively separated for misconduct or in lieu of trial by court-martial from the military services during this timeframe. For each servicemember who was administratively separated for misconduct or in lieu of trial by court-martial, DMDC provided the characterization of service upon separation as well as other relevant data. For the data on total separations and separations for misconduct, we included only servicemembers who were separated to civilian life—that is, we excluded separations due to factors such as death, joining officer commissioning programs, joining military academies, or separating to the National Guard or Reserve. If servicemembers had multiple relevant separation dates, we used only the most recent date. We excluded Reservists and National Guard members because, according to DMDC officials, reliable data were not available for the Army Reserve and National Guard. For each servicemember who was identified by DMDC as having been administratively separated for misconduct or in lieu of trial by court-martial, DHA provided us with data on whether the servicemember was diagnosed with PTSD, TBI, or certain other conditions within the 2 years prior to the servicemember’s separation date. To determine which conditions, in addition to PTSD and TBI, to include in our analysis, we reviewed relevant literature and consulted with clinical experts regarding the prevalence in the military population of conditions that cause behaviors that could contribute to misconduct. The conditions we selected are adjustment disorders, alcohol-related disorders, anxiety disorders, bipolar disorders, depressive disorders, personality disorders, and substance-related disorders. 
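The two data rules described above—restricting the analysis to separations to civilian life and, for servicemembers with multiple separation dates, keeping only the most recent date—can be sketched as follows. The record layout and field names are hypothetical, used only to illustrate the filtering logic:

```python
# Hypothetical separation records -- field names are illustrative only.
records = [
    {"id": "A", "date": "2012-05-01", "dest": "civilian"},
    {"id": "A", "date": "2014-09-30", "dest": "civilian"},  # most recent for A
    {"id": "B", "date": "2013-01-15", "dest": "reserve"},   # excluded: not to civilian life
    {"id": "C", "date": "2011-07-04", "dest": "civilian"},
]

# Rule 1: include only servicemembers separated to civilian life.
civilian = [r for r in records if r["dest"] == "civilian"]

# Rule 2: if a servicemember has multiple separation dates, keep the most recent.
latest = {}
for r in civilian:
    # ISO-formatted dates compare correctly as strings.
    if r["id"] not in latest or r["date"] > latest[r["id"]]["date"]:
        latest[r["id"]] = r

print(sorted(latest))          # ['A', 'C']
print(latest["A"]["date"])     # 2014-09-30
```

Servicemember B drops out under the first rule, and only servicemember A’s 2014 record survives the second, mirroring how the DMDC data were scoped for the analysis.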
To determine how many of the servicemembers who were separated for misconduct and previously diagnosed with PTSD, TBI, or certain other conditions were potentially ineligible for Department of Veterans Affairs (VA) benefits and services, we analyzed the data provided by DMDC on servicemembers’ characterizations of service upon separation. In addition, we obtained data from VA’s Veterans Benefits Administration on the extent to which servicemembers who were administratively separated for misconduct or in lieu of trial by court-martial and previously diagnosed with PTSD, TBI, or certain other conditions were deemed ineligible by VA for benefits and services. Specifically, we provided VA with a list of servicemembers who DMDC and DHA data show were administratively separated for misconduct or in lieu of trial by court-martial in fiscal years 2011-2015 and diagnosed within the prior 2 years with PTSD, TBI, or another condition included in our study. For each of these servicemembers, the Veterans Benefits Administration provided data on (1) whether the servicemember ever submitted a claim to VA for benefits or services, as well as the date of the most recent claim; (2) whether VA’s characterization of service determination process was ever completed for the servicemember, as well as the completion dates; and (3) the outcome of this process each time it was performed—that is, whether the servicemember was deemed eligible or ineligible for VA benefits and services. We assessed the reliability of the data provided by DMDC, DHA, and VA’s Veterans Benefits Administration in several ways, including discussing the reliability of the data with DOD and VA officials, performing electronic tests of the data to identify any outliers or anomalies, reviewing relevant documentation, and comparing the data with data from published sources. We determined that the data were sufficiently reliable for the purposes of our reporting objectives. 
Table 3 provides additional information on the extent to which servicemembers who were separated for misconduct from fiscal years 2011 through 2015 were previously diagnosed with post-traumatic stress disorder (PTSD) or traumatic brain injury (TBI), previously diagnosed with certain other conditions, or not previously diagnosed with any of the conditions. Tables 4 and 5 provide additional information on the extent to which servicemembers who were administratively separated for misconduct or administratively separated in lieu of trial by court-martial from fiscal years 2011 through 2015 were previously diagnosed with the individual conditions included in our study. Table 6 provides information on total separations of active duty servicemembers from military service from fiscal years 2011 through 2015. Tables 7 and 8 provide information on administrative separations for misconduct and administrative separations in lieu of trial by court-martial for active duty servicemembers during this timeframe. Tables 9 and 10 provide information on characterization of service for servicemembers who were administratively separated for misconduct or administratively separated in lieu of trial by court-martial from fiscal years 2011 through 2015 and diagnosed within the 2 years prior to separation with post-traumatic stress disorder (PTSD), traumatic brain injury (TBI), or certain other conditions. All four military services have policies related to screening servicemembers for post-traumatic stress disorder (PTSD) and traumatic brain injury (TBI) prior to separation; training servicemembers, including officers, on how to identify mild TBI symptoms in the deployed setting; and counseling servicemembers on eligibility for Department of Veterans Affairs (VA) benefits and services. Table 11 provides a brief description of the screening requirements outlined in the Department of Defense (DOD) and the military service policies. 
Table 12 illustrates selected DOD and military service policies that include requirements for training on mild TBI in the deployed setting. Finally, Table 13 outlines the policies that require counseling for servicemembers. In addition to the contact named above, Karin Wallestad, Assistant Director, Deitra H. Lee, Analyst-in-Charge, Priyanka Sethi Bansal, Carolyn Fitzgerald, Krister Friday, Q. Akbar Husain, and Jennifer Rudisill made key contributions to this report. Also contributing were Christine Davis, Cynthia Grant, Dae Park, Vikki Porter, Steven Putansu, and James Whitcomb.
The Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015 contains a provision that GAO examine the effect of PTSD, TBI, and certain other conditions on separations for misconduct. This report examines (1) the number of servicemembers separated for misconduct who were diagnosed with PTSD, TBI, or certain other conditions and were potentially ineligible for VA benefits and services; (2) the extent to which military services' policies to address the impact of PTSD and TBI on separations for misconduct are consistent with DOD's policies; (3) the extent to which the Army and Marine Corps have adhered to their policies; and (4) the extent to which DOD, the Army, and the Marine Corps monitor adherence to the policies. GAO analyzed DOD data; reviewed applicable policies; interviewed DOD, Army, Marine Corps, Air Force, and Navy officials; visited two Army sites and one Marine Corps site, selected based on factors such as separation rates; and reviewed a nongeneralizable sample of Army and Marine Corps servicemember misconduct separation documents. GAO's analysis of Department of Defense (DOD) data shows that 62 percent (57,141) of the 91,764 servicemembers separated for misconduct from fiscal years 2011 through 2015 had been diagnosed within the 2 years prior to separation with post-traumatic stress disorder (PTSD), traumatic brain injury (TBI), or certain other conditions that could be associated with misconduct. Specifically, 16 percent had been diagnosed with PTSD or TBI, while the other conditions, such as adjustment and alcohol-related disorders, were more common. Of the 57,141 servicemembers, 23 percent, or 13,283, received an “other than honorable” characterization of service, making them potentially ineligible for health benefits from the Department of Veterans Affairs (VA). GAO found that the military services' policies to address the impact of PTSD and TBI on separations for misconduct are not always consistent with DOD policy. 
For example, contrary to DOD policy, Navy policy does not require a medical examination—or screening—for certain servicemembers being separated in lieu of trial by court-martial to assess whether a PTSD or TBI diagnosis is a mitigating factor in the misconduct charged. This type of separation occurs when a servicemember facing a trial by court-martial requests, and is approved, to be discharged administratively. In addition, GAO found that two of the four military services have TBI training policies that are inconsistent with DOD policy. GAO also found that the Army and Marine Corps may not have adhered to their own screening, training, and counseling policies related to PTSD and TBI. For example, GAO found that 18 of the 48 nongeneralizable sample separation packets reviewed for Marine Corps servicemembers administratively separated for misconduct lacked documentation showing that the servicemember had been screened for PTSD and TBI. During interviews with Army officers, GAO found that some officers may not have received training to identify mild TBI symptoms, despite Army policy that all servicemembers should be trained. Further, GAO found instances in which both the Army and the Marine Corps may not have adhered to their counseling policies, which require that servicemembers, specifically prior to requesting separation in lieu of trial by court-martial, be counseled about their potential ineligibility for VA benefits and services. For 11 of the 48 separation packets included in GAO's analysis of Army servicemembers who requested separation in lieu of trial by court-martial, there was no documented evidence—or the evidence was unclear—as to whether the servicemembers received counseling. Finally, while the Army and the Marine Corps have some available data on servicemembers' screenings, training, and counseling, the military services do not use these data to routinely monitor whether they are adhering to relevant policies. 
Federal internal control standards call for agencies to establish monitoring activities to monitor internal control systems and evaluate the results. Without monitoring adherence to these policies, the military services cannot provide assurance that servicemembers with PTSD and TBI are receiving adequate consideration of their conditions as well as the services DOD has established for them. GAO is making five recommendations, including that DOD direct the Air Force and Navy to address inconsistencies in their screening and training policies and ensure that the military services monitor adherence to their screening, training, and counseling policies. DOD agreed with four of GAO's recommendations, but did not agree to address inconsistencies in training policies. GAO maintains that inconsistencies should be addressed, as discussed in the report.
NAGPRA requires federal agencies to (1) identify their Native American human remains, funerary objects, sacred objects, and objects of cultural patrimony, (2) try to determine whether a cultural affiliation exists with a present-day Indian tribe or Native Hawaiian organization, and (3) generally repatriate the culturally affiliated items to the applicable Indian tribe(s) or Native Hawaiian organization(s) under the terms and conditions prescribed in the act. NAGPRA covers five types of Native American cultural items (see table 1). NAGPRA’s requirements for federal agencies, museums, and the Secretary of the Interior, particularly the ones most relevant to their historical collections, which were the focus of our July 2010 report, include the following: Compile an inventory and establish cultural affiliation. Section 5 of NAGPRA requires that each federal agency and museum compile an inventory of any holdings or collections of Native American human remains and associated funerary objects that are in its possession or control. The act requires that the inventories be completed no later than 5 years after its enactment—by November 16, 1995—and in consultation with tribal government officials, Native Hawaiian organization officials, and traditional religious leaders. In the inventory, agencies and museums are required to establish geographic and cultural affiliation to the extent possible based on information in their possession. Cultural affiliation denotes a relationship of shared group identity which can be reasonably traced historically or prehistorically between a present-day Indian tribe or Native Hawaiian organization and an identifiable earlier group. Affiliating NAGPRA items with a present-day Indian tribe or Native Hawaiian organization is the key to deciding to whom the human remains and objects should be repatriated. 
If a cultural affiliation can be made, the act requires that the agency or museum notify the affected Indian tribes or Native Hawaiian organizations no later than 6 months after the completion of the inventory. The agency or museum was also required to provide a copy of each notice—known as a notice of inventory completion—to the Secretary of the Interior for publication in the Federal Register. The items for which no cultural affiliation can be made are referred to as culturally unidentifiable. Compile a summary of other NAGPRA items. Section 6 of NAGPRA requires that each federal agency and museum prepare a written summary of any holdings or collections of Native American unassociated funerary objects, sacred objects, or objects of cultural patrimony in its possession or control, based on the available information in their possession. The act requires that the summaries be completed no later than 3 years after its enactment—by November 16, 1993. Preparation of the summaries was to be followed by federal agency consultation with tribal government officials, Native Hawaiian organization officials, and traditional religious leaders. After a valid claim is received by an agency or museum, and if the other terms and conditions in the act are met, a notice of intent to repatriate must be published in the Federal Register before any item identified in a summary can be repatriated. Repatriate culturally affiliated human remains and objects. Section 7 of NAGPRA and its implementing regulations generally require that, upon the request of an Indian tribe or Native Hawaiian organization, all culturally affiliated NAGPRA items be returned to the applicable Indian tribe or Native Hawaiian organization expeditiously—but no sooner than 30 days after the applicable notice is published in the Federal Register—if the terms and conditions prescribed in the act are met. 
NAGPRA assigns certain duties to the Secretary of the Interior, which are carried out by the National NAGPRA Program Office (National NAGPRA) within NPS. In accordance with NAGPRA’s implementing regulations, National NAGPRA has developed a list of Indian tribes and Native Hawaiian organizations for the purposes of carrying out the act. The list comprises federally recognized tribes, Native Hawaiian organizations, and, at various points in the last 20 years, corporations established pursuant to the Alaska Native Claims Settlement Act (ANCSA). Since the enactment of two recognition laws in 1994, BIA has regularly published a comprehensive list of recognized tribes—commonly referred to as the list of federally recognized tribes—that federal agencies are supposed to use to identify federally recognized tribes. The recognition of Alaska Native entities eligible for the special programs and services provided by the United States to Indians because of their status as Indians has been controversial. Since a 1993 legal opinion by the Solicitor of the Department of the Interior, BIA’s list of federally recognized tribes has not included any ANCSA group, regional, urban, or village corporations. Finally, NAGPRA requires the establishment of a committee to monitor and review the implementation of inventory, identification, and repatriation activities under the act. Among other things, the Review Committee is responsible for, upon request, reviewing and making findings related to the identity or cultural affiliation of cultural items or the return of such items and facilitating the resolution of any disputes among Indian tribes, Native Hawaiian organizations, and federal agencies or museums relating to the return of such items. 
We refer to these findings, recommendations, and facilitation of disputes that do not involve culturally unidentifiable human remains simply as disputes; the Review Committee also makes recommendations regarding the disposition of culturally unidentifiable human remains. The NAGPRA Review Committee was established in 1991. Sections 11 and 13 of the NMAI Act generally require the Smithsonian to (1) inventory the Indian and Native Hawaiian human remains and funerary objects in its possession or control, (2) identify the origins of the Indian and Native Hawaiian human remains and funerary objects using the “best available scientific and historical documentation,” and (3) upon request repatriate them to lineal descendants or culturally affiliated Indian tribes and Native Hawaiian organizations. As originally written, the act did not set a deadline for the completion of these tasks, but amendments in 1996 added a June 1, 1998, deadline for the completion of inventories. The 1996 amendments also required the Smithsonian to prepare summaries for unassociated funerary objects, sacred objects, and objects of cultural patrimony by December 31, 1996. The NMAI Act uses the same definitions as NAGPRA for unassociated funerary objects, sacred objects, and objects of cultural patrimony, but the NMAI Act does not define human remains and it does not use the term associated funerary objects. Instead, the NMAI Act requires Indian funerary objects—which it defines as objects that, as part of the death rite or ceremony of a culture, are intentionally placed with individual human remains, either at the time of death or later—to be included in inventories and unassociated funerary objects to be included in summaries. The Smithsonian has identified two museums that hold collections subject to the NMAI Act: the National Museum of the American Indian and the National Museum of Natural History. 
Final repatriation decisions for the American Indian Museum are made by its Board of Trustees, and the Secretary of the Smithsonian has delegated responsibility for making final repatriation decisions for the Natural History Museum to the Smithsonian’s Under Secretary for Science. According to Smithsonian officials, when new collections are acquired, the Smithsonian assigns an identification number—referred to as a catalog number—to each item or set of items at the time of the acquisition or, in some cases, many years later. A single catalog number may include one or more human bones, bone fragments, or objects, and it may include the remains of one or more individuals. All of this information is stored in the museums’ electronic catalog system, which is partly based on historical paper card catalogs. Generally, each catalog number in the electronic catalog system includes basic information on the item or set of items, such as a brief description of the item, where the item was collected, and when it was taken into the museum’s collection. Since the NMAI Act was enacted, the Smithsonian has identified approximately 19,780 catalog numbers that potentially include Indian human remains (about 19,150 within the Natural History Museum collections and about 630 within the American Indian Museum collections). Finally, like NAGPRA, the NMAI Act requires the establishment of a committee to monitor and review the inventory, identification, and return of Indian human remains and cultural objects. The Smithsonian Review Committee was established in 1990 for this purpose. As we reported in July 2010, federal agencies have not yet fully complied with all of the requirements of NAGPRA. Specifically, we found that while the eight key federal agencies generally prepared their summaries and inventories on time, they had not fully complied with other NAGPRA requirements. 
In addition, we found that while the NAGPRA Review Committee had conducted a number of activities to fulfill its responsibilities under NAGPRA, its recommendations have had mixed success. Furthermore, while National NAGPRA has taken several actions to implement the act’s requirements, in some cases it has not effectively carried out its responsibilities. Finally, although the key agencies have repatriated many NAGPRA items, repatriation activity has generally not been tracked or reported governmentwide. The eight key federal agencies we reviewed in our July 2010 report generally prepared their summaries and inventories by the statutory deadlines, but the amount of work put into identifying their NAGPRA items and the quality of the documents prepared varied widely. Of these eight agencies, the Corps, the Forest Service, and NPS did the most extensive work to identify their NAGPRA items, and therefore they had the highest confidence level that they had identified all of them and included them in the summaries and inventories that they prepared. In contrast, relative to these agencies, we determined that BLM, BOR, and FWS were moderately successful in identifying their NAGPRA items and including them in their summaries and inventories, and BIA and TVA had done the least amount of work. As a result, these five agencies had less confidence that they had identified all of their NAGPRA items and included them in summaries and inventories. In addition, not all of the culturally affiliated human remains and associated funerary objects had been published in a Federal Register notice as required. For example, at the time of our report, BOR had culturally affiliated 76 human remains but had not published them in a Federal Register notice. 
All of the agencies acknowledged that they still have additional work to do and some had not fully complied with NAGPRA’s requirement to publish notices of inventory completion for all of their culturally affiliated human remains and associated funerary objects in the Federal Register. As a result of these findings, we recommended the agencies develop and provide to Congress a needs assessment listing specific actions, resources, and time needed to complete the inventories and summaries required by NAGPRA. We further recommended that the agencies develop a timetable for the expeditious publication in the Federal Register of notices of inventory completion for all remaining Native American human remains and associated funerary objects that have been culturally affiliated in inventories. The Departments of Agriculture and the Interior and TVA agreed with our recommendations. For example, Interior stated that this effort is under way in most of its bureaus and that it is committed to completing the process. It added that one of the greatest challenges to completing summaries and inventories of all NAGPRA items is locating collections and acquiring information from the facilities where the collections are stored. We found that the NAGPRA Review Committee, to fulfill its responsibilities under NAGPRA, had monitored federal agency and museum compliance, made recommendations to improve implementation, and assisted the Secretary in the development of regulations. As we reported, the committee’s recommendations to facilitate the resolution of disposition requests involving culturally unidentifiable human remains have generally been implemented (52 of 61 requests have been fully implemented). In disposition requests, parties generally agreed in advance to their preferred manner of disposition and, in accordance with the regulations, came to the committee to complete the process and obtain a final recommendation from the Secretary. 
In contrast to the amicable nature of disposition requests, disputes are generally contentious, and we found that the NAGPRA Review Committee’s recommendations have had a low implementation rate. Specifically, of the 12 disputes that we reviewed, the committee’s recommendations were fully implemented for 1 dispute, partially implemented for 3 others, not implemented for 5, and the status of 3 cases is unknown. Moreover, we found that some actions recommended by the committee exceeded NAGPRA’s scope, such as recommending repatriation of culturally unidentifiable human remains to non-federally recognized Indian groups. However, we found that the committee, National NAGPRA, and Interior officials had since taken steps to address this issue. We reported that National NAGPRA had taken several actions to help the Secretary carry out responsibilities under NAGPRA. For example, National NAGPRA had published federal agency and museum notices in the Federal Register, increasing the number published in recent years while reducing the backlog of notices awaiting publication. Furthermore, it had administered a NAGPRA grants program that, from fiscal years 1994 through 2009, awarded 628 grants totaling $33 million to Indian tribes, Native Hawaiian organizations, and museums. It had also administered the nomination process for NAGPRA Review Committee members. Overall, we found that most of the actions performed by National NAGPRA were consistent with the act, but we identified concerns with a few actions. Specifically, National NAGPRA had developed a list of Indian tribes for the purposes of carrying out NAGPRA, but at various points in the last 20 years the list had not been consistent with BIA’s policy or an Interior Solicitor legal opinion analyzing the status of Alaska Native villages as Indian tribes. 
As a result, we recommended that National NAGPRA, in conjunction with Interior’s Office of the Solicitor, reassess whether ANCSA corporations should be considered as eligible entities for the purposes of carrying out NAGPRA. Interior agreed with this recommendation and, after our report was issued, Interior’s Office of the Solicitor issued a memorandum in March 2011 stating that NAGPRA clearly does not include Alaska regional and village corporations within its definition of Indian tribes and that the legislative history confirms that this was an intentional omission on the part of Congress. The memorandum also states that while the National NAGPRA Program’s list of Indian tribes for purposes of NAGPRA must not include ANCSA regional and village corporations, National NAGPRA is currently bound by its regulatory definition of Indian tribe that contradicts the statutory definition by including ANCSA corporations. Because of this, the Solicitor suggests that the regulatory definition be changed as soon as feasible, followed by a corresponding change in the list. We also found that National NAGPRA did not always properly screen nominations for the NAGPRA Review Committee and, in 2004, 2005, and 2006, inappropriately recruited nominees for the committee, in one case recommending the nominee to the Secretary for appointment. As a result, we recommended that the Secretary of the Interior direct National NAGPRA to strictly adhere to the nomination process prescribed in the act and, working with Interior’s Office of the Solicitor as appropriate, ensure that all NAGPRA Review Committee nominations are properly screened to confirm that the nominees and nominating entities meet statutory requirements. Interior agreed with this recommendation, stating that the committee nomination procedures were revised in 2008 to ensure full transparency and that it will ask the Solicitor’s Office to review these procedures. 
In July 2010 we reported that while agencies are required to permanently document their repatriation activities, they are not required to compile and report that information to anyone. Of the federal agencies that have published notices of inventory completion, we determined that only three have tracked and compiled agencywide data on their repatriations—the Forest Service, NPS, and the Corps. These three agencies, however, along with other federal agencies that have published notices of inventory completion, do not regularly report comprehensive data on their repatriations to National NAGPRA, the NAGPRA Review Committee, or Congress. Through data provided by these three agencies, along with our survey of other federal agencies, we found that federal agencies had repatriated a total of 55 percent of human remains and 68 percent of associated funerary objects that had been published in notices of inventory completion as of September 30, 2009. Agency officials identified several reasons why some human remains and associated funerary objects had not been repatriated, including the lack of repatriation requests from culturally affiliated entities, repatriation requests from disputing parties, a lack of reburial sites, and a lack of financial resources to complete the repatriation. Federal agencies had also published 78 notices of intent to repatriate that covered 34,234 unassociated funerary objects, sacred objects, or objects of cultural patrimony. Due to a lack of governmentwide reporting, we recommended the Secretaries of Agriculture, Defense, and the Interior and the Chief Executive Officer of the Tennessee Valley Authority direct their cultural resource management programs to report their repatriation data to National NAGPRA on a regular basis, but no less than annually, for each notice of inventory completion they have or will publish. 
Furthermore, we recommended that National NAGPRA make this information readily available to Indian tribes and Native Hawaiian organizations and that the NAGPRA Review Committee publish the information in its annual report to Congress. The Departments of Agriculture and the Interior and TVA agreed with this recommendation, and Interior stated that its agencies will work toward completing an annual report beginning in 2011. In our May 2011 report, we found that the Smithsonian Institution still had much work remaining with regard to the repatriation activities required by the NMAI Act. Specifically, we found that while the American Indian and Natural History Museums generally prepared summaries and inventories within the statutory deadlines, the process that the Smithsonian relies on is lengthy and resource-intensive. Consequently, after more than 2 decades, the museums have offered to repatriate the Indian human remains in only about one-third of the catalog numbers identified as possibly including such remains since the act was passed. In addition, we found that the Smithsonian established a Review Committee to meet the statutory requirements, but limited its oversight of repatriation activities. Finally, we found that while the Smithsonian has repatriated most of the human remains and many of the objects that it has offered for repatriation, it has no policy on how to address items that are culturally unidentifiable. We found that while the American Indian and Natural History Museums had generally prepared summaries and inventories within the deadlines established in the NMAI Act, their inventories and the process they used to prepare them have raised questions about their compliance with some of the act’s statutory requirements. The first question was the extent to which the museums prepared their inventories in consultation and cooperation with traditional Indian religious leaders and government officials of Indian tribes, as required by the NMAI Act. 
Section 11 of the act directs the Secretary of the Smithsonian, in consultation and cooperation with traditional Indian religious leaders and government officials of Indian tribes, to inventory the Indian human remains and funerary objects in the possession or control of the Smithsonian and, using the best available scientific and historical documentation, identify the origins of such remains and objects. However, the Smithsonian generally began the consultation process with Indian tribes after the inventories from both museums were distributed. The Smithsonian maintains that it is in full compliance with the statutory requirements for preparing inventories and that section 11 does not require that consultation occur prior to the inventory being completed. The second question was the extent to which the Natural History Museum’s inventories—which were finalized after the 1996 amendments—identified geographic and cultural affiliations to the extent practicable based on available information held by the Smithsonian, as required by the amendments. The museum’s inventories generally identified geographic and cultural affiliations only where such information was readily available in the museum’s electronic catalog. However, the Smithsonian states that it does not interpret section 11 as necessarily requiring the inventory and identification processes to occur simultaneously, and therefore it has adopted a two-step process to fulfill section 11’s requirements. The legislative history of the 1996 amendments provides little clear guidance concerning the meaning of section 11. However, we also found that the two-step process that the Smithsonian has adopted is a lengthy and resource-intensive one and that, at the pace that the Smithsonian is applying this process, it will take several more decades to complete this effort. 
As a result of the identification and inventory process the Smithsonian is using, from the passage of the NMAI Act in 1989 through December 2010, the Smithsonian estimates that it has offered to repatriate approximately one-third of the estimated 19,780 catalog numbers identified as possibly including Indian human remains. The American Indian Museum had offered to repatriate human remains in about 40 percent (about 250) of its estimated 630 catalog numbers. The Natural History Museum had offered to repatriate human remains in about 25 percent (about 5,040) of its estimated 19,150 catalog numbers containing Indian human remains. In some cases, through this process, the Smithsonian did not offer to repatriate human remains and objects because it determined that they could not be culturally affiliated with a tribe. The congressional committee reports accompanying the 1989 act indicate that the Smithsonian estimated that the identification and inventory of Indian human remains as well as notification of affected tribes and return of the remains and funerary objects would take 5 years. However, more than 21 years later, these efforts are still under way. In light of this slow progress, we suggested that Congress may wish to consider ways to expedite the Smithsonian’s repatriation process, including, but not limited to, directing the Smithsonian to make cultural affiliation determinations as efficiently and effectively as possible. In May 2011, we reported that the Smithsonian Review Committee had conducted numerous activities to implement the special committee provisions in the NMAI Act, but its oversight and reporting activities have been limited. For example, we found that contrary to the NMAI Act, the committee does not monitor and review the American Indian Museum’s inventory, identification, and repatriation activities, although it does monitor and review the Natural History Museum’s inventory, identification, and repatriation activities. 
Although the law does not limit the applicability of the Smithsonian Review Committee to the Natural History Museum, the committee that the Secretary established in 1990 to meet this requirement oversees only the Natural History Museum’s repatriation activities and is housed within that museum. Although the Smithsonian believes Congress intended to limit the committee’s jurisdiction to the Natural History Museum, the statutory language and its legislative history do not support that view. The Smithsonian provided several reasons to support this contention but, as we reported in May 2011, these reasons are unpersuasive. Therefore, we recommended that the Smithsonian’s Board of Regents direct the Secretary of the Smithsonian to expand the Smithsonian Review Committee’s jurisdiction to include the American Indian Museum, as required by the NMAI Act, to improve oversight of Smithsonian repatriation activities. With this expanded role for the committee, we further recommended that the Board of Regents and the Secretary consider the most appropriate location for the Smithsonian Review Committee within the Smithsonian’s organizational structure. The Smithsonian agreed with this recommendation, stating that the advisory nature of the committee could be expanded to include consultation with the American Indian Museum. In our May 2011 report, we also found that neither the Smithsonian nor the Smithsonian Review Committee submits reports to Congress on the progress of repatriation activities at the Smithsonian. Although section 12 of the NMAI Act requires the Secretary, at the conclusion of the work of the committee, to so certify by report to Congress, there is no annual reporting requirement similar to the one required for the NAGPRA Review Committee. As we stated earlier, in 1989, it was estimated that the Smithsonian Review Committee would conclude its work in about 5 years and cease to exist at the end of fiscal year 1995. 
Yet the committee’s monitoring and review of repatriation activities at the Natural History Museum has been ongoing since the committee was established in 1990. As a result, we recommended that the Board of Regents, through the Secretary, direct the Smithsonian Review Committee to report annually to Congress on the Smithsonian’s implementation of its repatriation requirements in the NMAI Act. The Smithsonian agreed with this recommendation, stating that it will submit, on a voluntary basis, annual reports to Congress. The Smithsonian further stated that although the format and presentation are matters to be discussed internally, it intends to use the National NAGPRA report as a guide and framework for its discussion and report. Finally, during our review of the Smithsonian Review Committee’s activities, we determined that no independent administrative appeals process exists to challenge the Smithsonian’s cultural affiliation and repatriation decisions in the event of a dispute. As a result, we recommended that the Board of Regents establish an independent administrative appeals process, in which appeals would be decided by either the Board of Regents or another entity that can make binding decisions for the Smithsonian Institution, so that Indian tribes and Native Hawaiian organizations have an opportunity to appeal cultural affiliation and repatriation decisions made by the Secretary and the American Indian Museum’s Board of Trustees. The Smithsonian agreed with this recommendation, stating that it will review its dispute resolution procedures, with the understanding that the goal is to ensure that claimants have proper avenues to seek redress from Smithsonian repatriation decisions, including a process for the review of final management determinations. 
In May 2011 we reported that the Smithsonian estimates that, of the items it has offered for repatriation, as of December 31, 2010, it has repatriated about three-quarters (4,330 out of 5,980) of the Indian human remains, about half (99,550 out of 212,220) of the funerary objects, and nearly all (1,140 out of 1,240) of the sacred objects and objects of cultural patrimony. Some items have not been repatriated for a variety of reasons, including tribes’ lack of resources, cultural beliefs, and tribal government issues. In addition, we found that, in the inventory and identification process, the Smithsonian determined that some human remains and funerary objects were culturally unidentifiable. In some of those cases it did not offer to repatriate the items, and it does not have a policy on how to undertake the ultimate disposition of such items. Specifically, our report found that, according to Natural History Museum officials, about 340 human remains and about 310 funerary objects are culturally unidentifiable. The NMAI Act does not discuss how the Smithsonian should handle human remains and objects that cannot be culturally affiliated, and neither museum’s repatriation policies describe how they will handle such items. In contrast, a recent NAGPRA regulation that took effect in May 2010 requires, among other things, federal agencies and museums to consult with federally recognized Indian tribes and Native Hawaiian organizations from whose tribal or aboriginal lands the remains were removed before offering to transfer control of the culturally unidentifiable human remains. 
Although Smithsonian officials told us that the Smithsonian generally looks to NAGPRA and the NAGPRA regulations as a guide to its repatriation process, where appropriate, in a May 2010 letter commenting on the NAGPRA regulation on disposition of culturally unidentifiable remains, the Directors of the American Indian and Natural History Museums cited overall disagreement with the regulation, suggesting that it “favors speed and efficiency in making these dispositions at the expense of accuracy.” Nevertheless, in our May 2011 report, we recommended that the Smithsonian’s Board of Regents direct the Secretary and the American Indian Museum’s Board of Trustees to develop policies for the Natural History and American Indian Museums for the handling of items in their collections that cannot be culturally affiliated to provide for a clear and transparent repatriation process. The Smithsonian agreed with this recommendation, stating that both the American Indian and Natural History Museums, in the interests of transparency, are committed to developing policies in this regard and that such policies will give guidance to Native communities and the public as to how the Smithsonian will handle and treat such remains. In conclusion, Chairman Akaka, Vice Chairman Barrasso, and Members of the Committee, our two studies clearly show that while federal agencies and the Smithsonian have made progress in identifying and repatriating thousands of Indian human remains and objects, after 2 decades of effort, much work still remains to be done to address the goals of both NAGPRA and the NMAI Act. In this context, we believe that it is imperative for the agencies to implement our recommendations to ensure that the requirements of both acts are met and that the processes they employ to fulfill the requirements are both efficient and effective. This concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. 
For further information about this testimony, please contact Anu K. Mittal at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Jeffery D. Malcolm, Assistant Director; Mark Keenan; and Jeanette Soares also made key contributions to this statement. In addition, Allison Bawden, Pamela Davidson, Emily Hanawalt, Cheryl Harris, Catherine Hurley, Rich Johnson, Sandra Kerr, Jill Lacey, Anita Lee, Ruben Montes de Oca, David Schneider, John Scott, Ben Shouse, and Maria Soriano also made key contributions to the reports on which this statement is based. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Museum of the American Indian Act of 1989 (NMAI Act), as amended in 1996, generally requires the Smithsonian Institution to inventory and identify the origins of its Indian and Native Hawaiian human remains and objects placed with them (funerary objects) and repatriate them to culturally affiliated Indian tribes upon request. According to the Smithsonian, two of its museums--the American Indian and the Natural History Museums--have items that are subject to the NMAI Act. The Native American Graves Protection and Repatriation Act (NAGPRA), enacted in 1990, includes similar requirements for federal agencies and museums. The National NAGPRA office, within the Department of the Interior's National Park Service, facilitates the governmentwide implementation of NAGPRA. Each act requires the establishment of a committee to monitor and review repatriation activities. GAO's testimony is based on its July 2010 report on NAGPRA implementation (GAO-10-768) and its May 2011 report on Smithsonian repatriation (GAO-11-515). The testimony focuses on the extent to which key federal agencies have complied with NAGPRA's requirements and the extent to which the Smithsonian has fulfilled its repatriation requirements. GAO found that almost 20 years after NAGPRA was enacted, eight key federal agencies with significant historical collections--Interior's Bureau of Indian Affairs (BIA), Bureau of Land Management, Bureau of Reclamation, U.S. Fish and Wildlife Service and National Park Service; Agriculture's U.S. Forest Service; the U.S. Army Corps of Engineers; and the Tennessee Valley Authority--have not fully complied with the requirements of the act. All of the agencies acknowledged that they still have additional work to do and some have not fully complied with NAGPRA's requirement to publish notices of inventory completion for all of their culturally affiliated human remains and associated funerary objects in the Federal Register. 
In addition, GAO found two areas of concern with the National NAGPRA office's activities. First, National NAGPRA had developed a list of Indian tribes for the purposes of carrying out NAGPRA that was inconsistent with BIA's official list of federally recognized tribes and an Interior legal opinion. Second, National NAGPRA did not always screen nominations for NAGPRA Review Committee positions properly. GAO found that repatriations were generally not tracked or reported governmentwide. However, based on GAO's compilation of federal agencies' repatriation data, through September 30, 2009, federal agencies had repatriated 55 percent of the human remains and 68 percent of the associated funerary objects that had been published in notices of inventory completion. The relevant agencies agreed with the recommendations in both reports and GAO is making no new recommendations at this time.
FDA is responsible for protecting public health by ensuring the safety and efficacy of medical products marketed in the United States—including drugs, medical devices, and biologics—and the safety of nearly all food products other than meat and poultry, regardless of whether they were manufactured domestically or overseas. The agency’s responsibilities for overseeing food and medical products are divided among the following five FDA product centers, each responsible for specific types of products:

 The Center for Biologics Evaluation and Research (CBER) is responsible for regulating biologics for human use, such as blood, blood products, vaccines, and allergenic products, and ensuring that biologics are safe and effective.

 The Center for Devices and Radiological Health (CDRH) is responsible for regulating firms that manufacture and import medical devices and for ensuring that radiation-emitting products, such as lasers and x-ray systems, meet radiation safety standards.

 The Center for Drug Evaluation and Research (CDER) is responsible for regulating over-the-counter and prescription drugs for human use, including generic drugs.

 The Center for Food Safety and Applied Nutrition (CFSAN) is responsible for ensuring the safety of most foods for humans (except meat and poultry and processed egg products, which are regulated by the U.S. Department of Agriculture), including dietary supplements.

 The Center for Veterinary Medicine (CVM) is responsible for regulating the manufacture and distribution of drugs, devices, and food given to, or used by, animals.

Among other things, the centers monitor the safety and effectiveness of marketed medical products and the safety of food, formulate regulations and guidance, conduct research, communicate information to industry and the public, and set their respective program priorities. 
In addition to the work of the five centers, FDA’s Office of Regulatory Affairs (ORA) conducts field work for the product centers to promote compliance with agency requirements and applicable laws. ORA field activities include inspecting domestic and foreign manufacturing facilities, examining products offered for import, collecting and analyzing samples, and taking enforcement action. ORA’s Office of Criminal Investigations is responsible for investigating potential criminal violations involving FDA- regulated products and may refer cases to the Department of Justice for prosecution. FDA’s Office of the Commissioner is responsible for providing leadership and direction to the product centers and ORA. FDA’s Office of International Programs is responsible for leading, managing, and coordinating all of FDA’s international activities and its recently established overseas offices. In July 2011, FDA created “directorates” that align similar functions under common leadership within the Office of the Commissioner—the Office of Medical Products and Tobacco, which oversees CBER, CDER, and CDRH, as well as the Center for Tobacco Products; the previously established Office of Foods, which oversees CFSAN and CVM; and the Office of Global Regulatory Operations and Policy, which oversees ORA and the Office of International Programs. In recent years, we have reported on a variety of concerns related to FDA’s resource management, strategic planning, and internal communications and coordination. Specifically, in June 2009, we found that FDA was unable to provide complete and reliable estimates of its resource needs for its medical products. In February 2010, we reported on management challenges the agency faces and FDA’s difficulties in using practices for effective strategic and workforce planning. Coordinating internally among its centers and offices and externally with outside experts were among the agency’s major management challenges. 
Also, in September 2010, we reported on FDA’s overseas offices and the need for better coordination among the centers. For a list of these and other related reports, see Related GAO Products at the end of this report. The Federal Food, Drug, and Cosmetic Act prohibits the introduction of adulterated food, drugs, and medical devices into interstate commerce. However, the act does not define or use the term “economic adulteration” or “economically motivated adulteration.” The act’s provisions on adulteration include, but are not limited to, the following:

 A food is deemed to be adulterated if, among other circumstances, it bears or contains any added poisonous or deleterious substance that may render it injurious to health. A food is also deemed to be adulterated (1) if any valuable constituent has been omitted in whole or in part, or (2) if any substance has been substituted wholly or in part, or (3) if damage or inferiority has been concealed in any manner, or (4) if any substance has been added so as to increase its bulk or weight, or reduce its quality or strength, or make it appear better or of greater value than it is.

 A drug is deemed to be adulterated if it purports to be a drug whose name is recognized in an official compendium and its strength differs from, or its quality or purity falls below, the standards set forth in such compendium. If a drug does not purport to be a drug listed in an official compendium, it is deemed to be adulterated if its strength differs from, or its purity or quality falls below, that which it purports to possess. A drug is also deemed to be adulterated if, among other circumstances, any substance has been (1) mixed or packed with it so as to reduce its quality or strength or (2) substituted wholly or in part. 
 A device is deemed to be adulterated if it is, or purports to be or is represented as, a device which is subject to a performance standard established or recognized under the act unless such device is in all respects in conformity with such standard. It is also deemed adulterated if, among other circumstances, the device was not manufactured, packed, stored, or installed in conformity with good manufacturing practices.

Economic adulteration is not a new problem and ranges from simple actions, such as adding material to increase a product’s weight, to more sophisticated substitutions or additions that are designed to avoid detection by tests known to be used to authenticate ingredients or products. Economic adulteration differs from other forms of intentional adulteration, such as bioterrorism or sabotage, whose primary purpose is to cause harm. Because economic adulteration is intentional, it also differs from unintentional adulteration, such as adulteration through failure to follow good manufacturing practices. Although the primary driver of economic adulteration is financial gain rather than causing harm, it can pose a variety of public health risks. The actual risks will vary depending on the adulterant used, the person who consumes the product, and the length of use or exposure. There is a direct and immediate threat to public health when the adulterant is a toxic or lethal substance, as was the case in the melamine and heparin incidents. There are also risks that arise as a result of long-term, low-dosage exposure to a contaminant or as a result of using a product whose nutritional value or efficacy has been compromised by an adulterant. Certain populations, such as infants, the elderly, and persons with compromised immune systems are particularly vulnerable to these risks. In some cases, an adulterant may only pose a public health risk for those who are allergic to it, such as fish substituted with a less expensive fish to which a person is allergic. 
Furthermore, economic adulteration that poses no known health risk may expose a vulnerability in the supply chain—the network of handlers, suppliers, and middlemen involved in the production of food and drugs—that could be further exploited in the future, with serious consequences. Following the melamine and heparin incidents, FDA formed an internal work group focused on predicting and addressing what the agency referred to as “economically motivated adulteration.” The work group, comprising representatives from FDA’s food and medical product centers and ORA, held a May 2009 public meeting on the topic. For purposes of the meeting, FDA developed a working definition of economically motivated adulteration. The meeting, attended by representatives of academia, industry, and consumer groups, was designed to raise awareness about the potential for this problem and gather information on how to better predict, prevent, and address it. According to FDA officials, the work group stopped meeting shortly after the public meeting was held. FDA made a transcript of the meeting publicly available, but issued no report. FDA primarily approaches economic adulteration as part of its broader efforts to detect and prevent adulteration of food and medical products in general. In addition, CDER, ORA, CFSAN, and CBER have undertaken efforts specific to economic adulteration, while CVM and CDRH have not. However, agency entities have missed opportunities to communicate and coordinate efforts specifically directed at economic adulteration and identify potential public health risks. According to FDA officials, the agency primarily approaches economic adulteration as part of its broader efforts to combat adulteration in general. Such efforts include, for example, the agency’s actions to ensure the safety of imported products. According to FDA officials, these broad efforts to combat adulteration could also combat economic adulteration. 
Agency officials noted that the Federal Food, Drug, and Cosmetic Act does not distinguish among motives or require motive to be established to determine whether a product is adulterated. FDA adopted a working definition of economically motivated adulteration for the purposes of discussing the topic at its May 2009 public meeting. In its written comments on our draft report, HHS told us that the recently formed FDA Working Group on Economically Motivated Adulteration will use the working definition proposed at the public meeting, enabling FDA centers to focus their discussions and encouraging communication and collaboration. According to an FDA official, the agency generally does not expend resources to distinguish between economic and other motives for adulteration. Rather, when the agency detects any form of adulteration that poses a public health risk, it can conduct an investigation, request a recall to get the product off the market, and take enforcement action. A senior FDA official told us there is value in making a distinction between economic adulteration and other forms of adulteration to guide the agency’s thinking about how to be more proactive in addressing this issue. Examples of broader FDA efforts to address adulteration include:

 ORA’s Predictive Risk-Based Evaluation for Dynamic Import Compliance Targeting (PREDICT). This tool generates a numerical risk score for all FDA-regulated products by analyzing importers’ shipment information using sets of FDA-developed risk criteria based in part on publicly available information, which may indicate opportunities for economic adulteration. PREDICT then targets for examination products that have a high risk score. As of September 2011, PREDICT was operating in ports of entry in 13 of 16 FDA districts, and FDA officials said the agency expects PREDICT to be operational in all ports of entry by the end of 2011.

 CVM’s Pet Event Tracking Network (PETNet). 
In August 2011, CVM launched PETNet, a secure, Internet-based network composed of FDA and other federal and state agencies with authority over pet food that allows them to exchange real-time information about outbreaks of illness in animals associated with pet food and other pet food-related incidents. PETNet members can elect to receive alerts about pet food incidents and create alerts when they are aware of a pet food incident within their jurisdiction. According to FDA, the information would be used to help federal and state regulators determine how best to use inspectional and other resources to either prevent or quickly limit the adverse events caused by adulterated pet food. Use of the system is voluntary.

 CDER’s Secure Supply Chain Pilot Program. This program, which is in the process of being implemented, is intended to help the agency ensure the safety of imported drugs by enabling it to focus its resources on preventing the importation of drugs that do not comply with applicable FDA requirements. The program is intended to allow a limited number of drug companies to import their products on an expedited basis if, among other things, they can meet FDA criteria showing that they maintain control over their products from manufacture through entry into the United States. FDA expects to announce the date on which it will begin accepting applications for the pilot by the end of 2011.

In addition to these broader efforts, some FDA entities have undertaken efforts specific to economic adulteration. For example, in the aftermath of the melamine and heparin incidents, CDER, ORA, CFSAN, and CBER have taken the following steps to specifically address economic adulteration:

 CDER has developed a model to rank the 1,387 active pharmaceutical ingredients (API) known to be in current use according to their susceptibility to economic adulteration. 
According to CDER officials, the ranking model incorporates various risk factors, such as estimates for volume of use, cost per unit of the API, and reliance on testing methods to check quality that are known to be less accurate than more modern methods developed for other APIs. CDER officials told us the center sampled and tested 20 of the 77 higher-ranked APIs in 2010 and found no evidence of any significant contamination suggesting intentional adulteration. According to agency officials, after this pilot program is completed, FDA will determine if the program was valuable and, if so, whether the model’s risk factors may need to be adjusted.

 CDER is leading efforts to work with United States Pharmacopeia (USP) to focus on the vulnerability of drugs to economic adulteration. USP is a nonprofit organization that sets standards for medicines, food ingredients, and dietary supplements. USP’s drug standards are enforceable under the Federal Food, Drug, and Cosmetic Act. Actions CDER officials say they have taken include selecting 20 USP standards for updating that include certain over-the-counter drugs, inactive ingredients used in high volume, and APIs that use outdated technology or for which there are no procedures to identify impurities. The goal of this modernization effort is to replace outdated USP standards with more modern, accurate, and specific ones. CDER has also worked closely with USP in revising the heparin testing standard and the standards for glycerin and five other similar drug product ingredients to prevent economic adulteration with diethylene glycol, a cheaper but deadly ingredient often substituted for glycerin.

 ORA, along with the Department of Homeland Security and the Department of Agriculture’s Food Safety and Inspection Service, contracted in 2010 with the University of Minnesota’s National Center for Food Protection and Defense to model risk factors for improved detection of economic adulteration. 
The contract consists of three phases: (1) a survey of U.S. companies to collect information on prior or potential economic adulteration experiences and identify characteristics of potential targets of economic adulteration; (2) the development of strategies to group test methods to identify those methods that pose the greatest potential risk for economic adulteration, including the level of technical sophistication required to exploit the test method; and (3) the development of supply chain models in order to identify shifts in these supply chains that may indicate the potential for economic adulteration.

 CFSAN formed a work group on the economic adulteration of food, which started meeting in February 2008. CFSAN officials said the group, which includes representatives from CVM and ORA, generally meets monthly and is looking at the impact of economic adulteration on food safety and whether there is other work that FDA could undertake to mitigate that impact. Among other things, the group has proposed creating a page on FDA’s website on economic adulteration, and it has developed a methodology for the testing of pomegranate juice, which officials said they chose to focus on because it is expensive and because its health benefits have been widely touted. CFSAN officials said the group is also looking at ways to make industry more comfortable with providing information to FDA on possible economic adulteration.

 CFSAN has a number of efforts under way to develop analytical tests and tools for detecting economic adulteration. For example, the center has developed a method for analyzing nitrogen-containing compounds similar to melamine that might be used to boost apparent nitrogen content in milk and other protein products. CFSAN’s research office has a project to develop methods to detect the adulteration of powdered milk products and fruit juices. 
One element of this project involves the creation of a library of powdered milk signatures, against which new samples can be compared (and adulterants identified) using modern statistical methods. This project is slated for completion over the next 2 years.

CBER established a process in late 2008 to extract relevant product component information from regulatory applications and input this data into a database. The center has since expanded its process of extracting product component information from applications to include, for example, ingredients that may be subject to contamination. The database includes nonproprietary, unique ingredient identifiers and other information designed to facilitate faster identification of products made from components suspected of being economically adulterated.

In contrast to the other entities, senior CVM officials we spoke with said that although the center has broad initiatives designed to prevent and detect adulteration in general (i.e., PETNet), CVM has undertaken no efforts targeted to economic adulteration and has no plans to do so. Officials said that the melamine incident gave them greater awareness that products with high-value ingredients could be susceptible to economic adulteration and that this was the only lesson they learned from the melamine incident. Officials said they recognize that CVM-regulated products may be vulnerable to economic adulteration because they are composed of numerous byproducts, any one of which could be adulterated. Nevertheless, they said that they do not believe economic adulteration is a growing problem because of industry’s overall awareness of its supply chain through efforts such as verifying certificates of analysis of ingredients from suppliers.
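The signature-library screening in CFSAN’s powdered-milk project described earlier, in which a new sample is compared against a library of authentic references, can be sketched as a simple nearest-neighbor check. Everything concrete below (the signature values, the distance metric, and the threshold) is an invented illustration, not CFSAN’s actual method:

```python
# Illustrative sketch of a "signature library" screen: compare a new
# sample's measured signature against known-authentic references and flag
# outliers. The signature values, distance metric, and threshold are
# invented assumptions, not CFSAN's actual method.
import math

# Hypothetical authentic powdered-milk signatures (e.g., normalized
# intensities at a few measurement channels).
LIBRARY = [
    [0.82, 0.41, 0.13, 0.07],
    [0.80, 0.43, 0.12, 0.08],
    [0.83, 0.40, 0.14, 0.06],
]
THRESHOLD = 0.05  # maximum distance to the nearest authentic signature

def distance(a, b):
    """Euclidean distance between two signatures."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def screen(sample):
    """Return (flagged_as_suspect, distance_to_nearest_reference)."""
    d = min(distance(sample, ref) for ref in LIBRARY)
    return d > THRESHOLD, d

authentic_flagged, _ = screen([0.81, 0.42, 0.13, 0.07])  # near the library
suspect_flagged, _ = screen([0.60, 0.30, 0.35, 0.20])    # far from every reference
```

In practice such screens use richer chemometric models, but the principle is the same: a sample whose signature falls outside the envelope of authentic references is flagged for follow-up testing.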
Officials from CDRH told us that, other than its broader efforts to combat adulteration in general, the center had no initiatives specifically directed at addressing economic adulteration, but indicated they are responsible for products that are vulnerable to economic adulteration. For example, CDRH officials said that they have found that a manufacturer of imported sunglass lenses may have been substituting inferior material. However, center officials were unaware of any actual cases of economic adulteration involving products for which they are responsible. We found two instances in which CVM did not know about or participate in efforts on economic adulteration that involved CVM-regulated products. First, the director of the University of Minnesota’s National Center for Food Protection and Defense told us that, as part of the center’s contract with ORA, it will be drafting a list of foods at high risk of economic adulteration and that the list will likely include foods that are also used as animal feed ingredients. The director noted that, with the exception of certain kinds of fats, the global supply chain for animal food and feed is the same as that for human foods. The director said that, for this reason, his center had considered finding ways to make its work for FDA even more applicable to animal feed. Although CVM provided developmental input, direction, and technical support with regard to the contract, CVM officials said they were not aware of the center’s work under the contract to develop this list of high-risk foods. Second, CFSAN has a research project that focuses primarily on developing methods for authenticating protein-based foods and ingredients, detecting the presence of adulterants, and identifying chemical hazards in protein-based products. Among other things, this project is to develop methods for screening skim milk powder, which can be found in both food and animal feed, for the presence of soy or other vegetable protein. 
Senior CVM officials said they were unaware of this research project, but they stated that CVM has been involved in developing methods to identify contaminants of protein-based ingredients. We also found an instance where FDA entities engaged in similar efforts on economic adulteration but did not communicate or coordinate about those efforts. Specifically, as we mentioned earlier, ORA and CDER are engaged in similar efforts to determine which human foods and drugs, respectively, are at greatest risk for economic adulteration. However, according to ORA and CDER officials, they have not coordinated those efforts or communicated about them, even though they are using some of the same risk factors in their efforts—including price fluctuations and reliance on less specific test methods. Officials from both entities said that such communication and coordination could be beneficial to both efforts. In addition, we have previously identified internal coordination—among FDA’s centers and offices—as one of the agency’s major management challenges based on a review of evaluations of FDA by HHS and the FDA Science Board, among others. Also, in our 2009 survey of FDA managers, 70 percent reported that better internal coordination and communication would greatly improve their ability to contribute to FDA’s goals and responsibilities, though 28 percent reported that FDA was making great progress in this area. Furthermore, we asked FDA managers in our survey to identify the top priorities that FDA leadership should address to achieve agency goals and responsibilities, and the second most commonly identified issue was improving coordination within FDA. In detailed written responses in our survey, some managers noted that better coordination among FDA’s centers could increase effectiveness and decrease redundancy. 
Furthermore, a recommendation made by FDA’s work group on economic adulteration in August 2009 related to communication—that FDA designate a lead office and develop standard operating procedures for information sharing—was not implemented. A senior FDA official told us that there has been some work across FDA centers on economic adulteration but that the centers did not see a lot of value in additional coordination because of the differences between the products each center oversees. However, the issue of economic adulteration cuts across the agency, and without communicating about and coordinating on economic adulteration efforts, FDA may not be making the best use of scarce resources. In August 2011, FDA officials told us that the agency’s Compliance Policy Council, which consists of senior representatives of ORA and the FDA centers, met in July 2011 and discussed whether and how the agency should coordinate work on economic adulteration. The council directed risk management staff from ORA and the centers to form a group to discuss opportunities to share intelligence and approaches to economic adulteration and then report back to the council. According to FDA officials, the proposed agenda included discussion about the development of standard operating procedures. In its written comments on our draft report, HHS told us that the work group held its first meeting on September 23, 2011, while our report was at the agency for comment. The Commissioner and other senior FDA officials have often spoken publicly about the threat posed by economic adulteration. In its July 2011 report entitled Pathway to Global Product Safety and Quality, FDA stated that globalization has fundamentally altered the economic and security landscape, requiring FDA to transform itself into a global agency prepared to regulate in an environment in which product safety and quality know no borders. 
The report also called economically motivated harms perhaps the most serious challenge on the horizon for the agency and noted that the heparin and melamine incidents underscore how serious the potential danger can be. The report also noted that FDA needs to move beyond its current efforts and think strategically across the agency. However, FDA officials told us that the Office of the Commissioner has not issued specific written guidance on how FDA centers and offices should approach or address their economic adulteration efforts. The Office of the Commissioner’s role is to provide policy making, program direction, coordination, liaison, and expert advice for agency programs. According to federal standards for internal control, agencies should have documented policies and procedures in place to carry out management’s directives. This documentation should be readily available for examination in management directives, administrative policies, or operating manuals in paper or electronic form. In addition, the federal standards call for effective communication, with information flowing down, across, and up the organization. FDA officials and stakeholders we interviewed cited several key challenges the agency faces in detecting and preventing economic adulteration, and stakeholders identified options for enhancing the agency’s efforts to address economic adulteration. FDA officials and stakeholders told us that responding to increased globalization and the expanding complexity of the supply chains for both food and medical products is a key challenge in addressing economic adulteration. Globalization has led to an increase in the variety, complexity, and volume of imported food and drugs, which complicates FDA’s task of ensuring their safety. In addition to globalization, an increase in supply chain complexity—the growth in the networks of handlers, suppliers, and middlemen—also complicates FDA’s task. 
According to FDA, the market for outsourcing portions of pharmaceutical production has more than doubled in the past 9 years. FDA noted in its July 2011 Pathway report that more products are following increasingly complex paths through multi-step supply chains before reaching the United States. Figure 1 illustrates the complex supply chain of a single commodity, canned tuna. As the figure shows, after the tuna is caught in East Asia, it can travel through many countries for processing and canning before the finished product finally reaches store shelves in the United States. FDA officials gave several reasons that this increasing complexity poses a challenge. For example, CFSAN officials told us that food companies can change ingredients and suppliers at will without having to notify FDA of those changes, making it difficult to track or trace an ingredient back to its source or supplier. However, many food manufacturers are required to keep records of the immediate previous sources of all foods received. Similarly, CDER officials said that it is increasingly difficult to trace ingredients through drug supply chains due to the increasing number of parties involved and the increase in transfers between parties in other countries. Stakeholders from associations representing the food and medical product industries agreed that the large number of imported ingredients and foreign establishments, as well as the difficulties related to tracking an ingredient back to the original source, are of particular concern. FDA officials and stakeholders said that obtaining information on potential instances of economic adulteration is critical to addressing the problem, but they also agreed that the agency faces challenges in gathering such information from industry. Industry may be a source of information on potential incidents of adulteration because companies regularly test ingredients from suppliers. 
The responsible party for a firm that introduces into commerce an article of food containing an adulterated ingredient that could cause serious adverse health consequences or death must report this information to FDA through the Reportable Food Registry. However, agency officials and industry representatives said industry is often reluctant to share such information when an adulterated ingredient has not entered into commerce. For example, a company may be concerned that it could provoke a lawsuit if it reported a supplier for intentionally adulterating products and the accusation was subsequently determined to be unfounded. They said that a wrongful accusation can have serious consequences, such as compromising the integrity of the company’s brands and products if certain information became public. In addition to a need for more information about industry suppliers, FDA officials told us that they need more information about substances that could be used to adulterate products. These officials said that new, more precise testing methods need to be developed to detect these adulterants because some current tests are outdated or insufficiently specific. Recent cases of melamine contamination in pet food illustrate the need for such tests. The presence of melamine in pet food was not initially discovered by the standard test for protein because that test was designed to detect nitrogen and could not distinguish between protein and melamine. The contamination was ultimately discovered when FDA scientists developed a specific test to identify melamine. FDA and others determined that melamine was apparently selected as an adulterant to evade the original testing and increase the apparent protein content. CDER officials also told us that it is difficult to detect instances of economic adulteration because the potential adulterant is often unknown or has not yet been identified. 
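The melamine evasion described above comes down to simple arithmetic: nitrogen-based protein assays (such as the standard Kjeldahl method) multiply measured nitrogen by a conversion factor, commonly 6.25, so any nitrogen-rich adulterant inflates the reported protein. The sketch below uses standard chemistry for melamine, which is roughly 66.6 percent nitrogen by mass, but the sample composition is an invented illustration:

```python
# Why a nitrogen-based protein assay cannot distinguish protein from
# melamine. The conversion factor and melamine nitrogen content are
# standard chemistry; the sample composition is an invented illustration.

N_TO_PROTEIN = 6.25                       # generic nitrogen-to-protein factor
PROTEIN_N_FRACTION = 1 / N_TO_PROTEIN     # true protein is ~16% nitrogen
MELAMINE_N_FRACTION = 84.042 / 126.123    # melamine (C3H6N6) is ~66.6% nitrogen

def apparent_protein(protein_g, melamine_g):
    """Grams of protein a nitrogen assay would report for a sample."""
    total_n = protein_g * PROTEIN_N_FRACTION + melamine_g * MELAMINE_N_FRACTION
    return total_n * N_TO_PROTEIN

honest = apparent_protein(20.0, 0.0)        # 20 g of real protein reports as 20 g
diluted = apparent_protein(15.0, 0.0)       # dilution alone would report only 15 g
adulterated = apparent_protein(15.0, 1.2)   # ~1.2 g of melamine masks the dilution
```

In this sketch, a sample whose real protein has been cut from 20 to 15 grams reports essentially the same protein content as an honest sample once a small amount of melamine is added, which is why the contamination surfaced only after FDA scientists developed a melamine-specific test.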
For example, during the heparin incident, the available test methods for heparin were not able to detect the contaminant oversulfated chondroitin sulfate. FDA collaborated with scientists outside the agency to identify the contaminant and develop new tests to detect it. Industry may be the best source of tests to detect adulteration because companies develop such tests to monitor the products they receive from their suppliers; however, industry officials indicated that they are often reluctant to share such information because it is proprietary.

Stakeholders cited additional challenges that FDA faces in addressing economic adulteration, including its legal authorities. For example, one stakeholder said that FDA does not have the authority to accredit, or approve, third parties to inspect establishments that make drugs; the stakeholder said that if FDA did have that authority, such inspections may help decrease FDA’s inspection workload and could increase the total number of facilities inspected. FDA recently received authority to recognize, in certain situations, accreditation bodies that may then accredit qualified third parties to inspect food establishments. The FDA Food Safety Modernization Act provides that, no later than January 2013, FDA is to establish a program to recognize these accreditation bodies. It is worth noting, though, that FDA has had the authority to accredit third parties to conduct inspections of certain domestic and foreign medical device manufacturing establishments since 2002. FDA implemented its accreditation program, permitting eligible establishments to voluntarily request inspections from third-party organizations, but relatively few establishments have chosen to take advantage of this program.

Some stakeholders also told us that FDA’s limited resources, including staffing, present a challenge.
Specifically, they said FDA has limited ability to investigate potentially economically adulterated products because such investigations are resource-intensive. They also told us that FDA does not have the range of expertise among staff that is needed to address economic adulteration, in particular staff with a background in intelligence gathering or law enforcement. We have previously reported on FDA’s own concerns about its staffing levels and oversight responsibilities for certain activities, such as its oversight of medical devices and inspections of establishments that manufacture approved drugs.

Some stakeholders supported increased oversight by FDA, in particular, as an option to obtain more information on supply chains—information that is useful in tracing the source of economic adulteration. For example, one stakeholder suggested that the use of track-and-trace technology—such as using standard numerical identifiers on prescription drug packages—could facilitate FDA’s oversight of the supply chain by making it easier for FDA and industry to trace adulterated ingredients back to the point of contamination. Under the new FDA Food Safety Modernization Act, the Secretary of HHS, acting through FDA, is directed to establish a system that will improve its ability to rapidly track and trace both domestic and imported foods. Similarly, the Food and Drug Administration Amendments Act of 2007 required FDA to develop a unique device identifier system to adequately identify a medical device through distribution and use. According to FDA officials, the agency expects to publish a proposed rule on the establishment of this system by the end of 2011. Many stakeholders also suggested that FDA increase its regulatory and enforcement actions to address economic adulteration.
These stakeholders said that public health risk should be FDA’s priority in taking such actions, but many also told us that FDA should pursue those who adulterate for economic gain, including in instances that may not have a large negative public health impact. For example, some stakeholders suggested building criminal cases against those who adulterate for economic gain and prosecuting them swiftly and visibly to help ensure that companies are complying with laws and regulations. In addition, these stakeholders said that, even when the adulteration has little health impact, such actions could help protect public health by deterring future instances, some of which may pose a significant health threat. Depending on the circumstances, such as the type of violation and product involved, a range of enforcement actions or penalties could be pursued. However, in February 2009, we reported that FDA had taken few actions to pursue instances of economic fraud in seafood. In that report, we found that FDA did not issue any regulatory letters to companies regarding seafood fraud from 2005 through 2008, and according to a senior FDA official, the agency had not taken any enforcement actions for seafood fraud since 2000.

Even with the challenges related to the disclosure of proprietary information, stakeholders also suggested that greater communication with industry could enhance FDA efforts to gather information on economic adulteration. One option for greater communication that several stakeholders identified was the creation of an information clearinghouse, through which companies could anonymously share information on adulterated ingredients with FDA and other companies. Stakeholders noted that the clearinghouse could enhance FDA’s ability to disseminate information on adulterated products quickly, facilitate secure information sharing across industries, and enable FDA and industry to respond more rapidly to potential instances of adulteration.
For example, they said that a clearinghouse could allow the sharing of information, such as information on market price fluctuations, environmental disasters, or other macroeconomic factors. In the view of these stakeholders, this type of information may help both industry and FDA better target their efforts to detect and prevent economic adulteration. One stakeholder said that such a clearinghouse was an opportunity for industry and FDA to share information from various sources in a central location, which would help them draw conclusions about the authenticity of ingredients or raw materials. This stakeholder suggested that if an information clearinghouse had existed prior to the heparin incident, it could have contained critical information—such as the sudden increase or decrease in the price of ingredients for food or drugs—to alert FDA and industry to the potential for adulteration. One stakeholder noted that because some of the industries affected by economic adulteration are small, some companies might easily be identified by the information reported, even if they reported it anonymously. Consequently, some stakeholders suggested engaging a neutral third party to operate the information clearinghouse, thus helping to ensure that the information shared was free of specific company identifiers. FDA officials said that they are examining various ways to facilitate information sharing with industry and have discussed the idea of a clearinghouse, but they have no plans to develop one.

In addition to formal information sharing, some stakeholders suggested more informal interaction between industry and FDA. Stakeholders noted that increased dialogue could provide opportunities for FDA to communicate to industry its overall strategy on economic adulteration.
Some stakeholders told us that FDA’s communication during adverse public health events was clear and timely but that at other times they were unsure what FDA was doing to address potential economic adulteration. Some stakeholders expressed a willingness to work with FDA on the issue but said that they need to better understand FDA’s expectations of industry. For example, one stakeholder suggested a forum where FDA officials can talk to industry directly and engage in dialogue to clarify the agency’s strategy. Some stakeholders from food industry groups also said that they believe the recent passage of the FDA Food Safety Modernization Act provides new opportunities for both FDA and industry to address economic adulteration. One stakeholder noted that the new law may give FDA more opportunities to include economic adulteration in its inspection program. In addition, stakeholders told us that they believe the law provides a science- and risk-based approach for companies to verify their ingredient suppliers, including multiple ways of assuring the public and FDA that industry has processes in place to detect economic adulteration. Specifically, under the act, certain facilities are required to identify reasonably foreseeable hazards and to prepare written control plans that illustrate reasonable approaches to looking for intentional adulteration. Lastly, one stakeholder said that FDA may need additional authority to require the drug industry to provide the agency with information critical to securing the medical product supply chain. Additional authority may include, for example, allowing FDA to require enhanced documentation from industry on its supply chains to increase transparency. In its comments on one of our recent reports, HHS also mentioned legislation previously under consideration by Congress that it believed would, if enacted, provide FDA with helpful tools to further secure the nation’s drug supply chain. 
For example, according to the agency, the proposed legislation would have provided FDA authority to require foreign and domestic drug manufacturers to implement quality systems and adopt plans to identify and mitigate hazards. In its comment letter, FDA said that such legislation could ensure that the agency can hold industry accountable for the security and integrity of its supply chains and quality control systems.

Economic adulteration is not a new problem. It can undermine confidence in the safety of the nation’s food and medical products and have significant economic consequences for industry. The recent crises involving the contamination of pet food with melamine and the adulteration of heparin with oversulfated chondroitin sulfate showed that economic adulteration continues to be a problem and can have serious public health consequences. Senior FDA officials, including the Commissioner, have often spoken publicly about the threat posed by economic adulteration. However, FDA does not have a definition of economic adulteration. Without such a definition, when FDA detects adulteration, it is more difficult for the agency to distinguish economic adulteration from other forms of adulteration and to guide its thinking about how to be more proactive on this issue. In addition, FDA has not provided guidance to its centers and offices on how they should approach or address their economic adulteration efforts. This is not consistent with federal standards of internal control, which state that agencies should have documented policies and procedures in place to carry out management’s directives. Some entities have undertaken efforts that specifically focus on economic adulteration, but they have not always communicated or coordinated their efforts with other FDA entities. Without such communication and coordination, in these times of economic uncertainty, FDA may not be making the best use of its scarce resources.
As food and medical product supply chains become increasingly global and complex, economic adulteration will remain a threat. To enhance FDA’s efforts to combat the economic adulteration of food and medical products, we recommend that the Commissioner of FDA take the following three actions: adopt a working definition of economic adulteration; provide written guidance to agency centers and offices on the means of addressing economic adulteration; and enhance communication and coordination of agency efforts on economic adulteration.

We provided a draft of this report to HHS for review and comment. We received written comments from HHS, which are reproduced in appendix II. HHS neither agreed nor disagreed with our recommendations. In its comments, HHS stated that FDA views the term “economically motivated adulteration” as describing a subset of cases within the broader concept of adulteration, and believes that a holistic approach toward understanding and addressing adulteration generally is the best course forward. HHS also said that this approach will best serve the agency as it strives to protect the health and well-being of the American people by preventing, detecting, and taking appropriate responses to all adulterations of food and medical products. As we note in our report, however, agency entities have missed opportunities to communicate and coordinate efforts specifically directed at economic adulteration and identify potential public health risks. At the same time, FDA said that it recognizes the importance of sharing and leveraging information relevant to economically motivated adulteration and the utility of a mechanism for facilitating such sharing and collaboration at FDA. The department provided additional information in its written comments on planned actions of FDA’s Working Group on Economically Motivated Adulteration that are consistent with two of the three recommendations we made in our draft report.
The additional comments related to our recommendations that FDA adopt a working definition of economic adulteration and enhance communication and coordination of agency efforts on economic adulteration are as follows:

Adopt a working definition of economic adulteration. HHS stated that the Working Group on Economically Motivated Adulteration will use the working definition of economically motivated adulteration that FDA proposed at its May 2009 public meeting on the topic.

Enhance communication and coordination of agency efforts on economic adulteration. HHS stated that FDA expects the efforts of the working group will result in enhanced collaboration and communication at FDA on ways to approach and address situations of economically motivated adulteration.

We have included this additional information in our report. HHS also provided technical comments, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staffs have any questions about this report, please contact Lisa Shames at (202) 512-3841 or shamesl@gao.gov or Marcia Crosse at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
This report examines (1) the approaches the Food and Drug Administration (FDA) uses to detect and prevent economic adulteration of food and medical products, and (2) the challenges, if any, FDA faces in detecting and preventing economic adulteration and stakeholder views on options for FDA to enhance its efforts to address economic adulteration. For this report, we define economic adulteration as “the fraudulent, intentional substitution or addition of a substance in a product for the purpose of increasing the apparent value of the product or reducing the cost of its production, i.e., for economic gain. [This] includes dilution of products with increased quantities of an already-present substance (e.g., increasing inactive ingredients of a drug with a resulting reduction in strength of the finished product, or watering down of juice) to the extent that such dilution poses a known or possible health risk to consumers, as well as the addition or substitution of substances in order to mask dilution.” Our definition of economic adulteration is the same as the working definition of “economically motivated adulteration” that FDA developed for a May 2009 public meeting to raise awareness and solicit input on the topic. We did not include counterfeiting of a finished product because counterfeiting concerns the unauthorized use of intellectual property rights.

To determine the approaches FDA uses to detect and prevent economic adulteration of food and medical products, we interviewed officials from the five FDA centers responsible for food and medical products, including the Center for Food Safety and Applied Nutrition, the Center for Veterinary Medicine, the Center for Drug Evaluation and Research, the Center for Devices and Radiological Health, and the Center for Biologics Evaluation and Research, as well as FDA’s Office of Regulatory Affairs, Office of International Programs, and Office of the Commissioner.
We also interviewed former FDA officials and representatives of organizations that have been assisting FDA in its efforts to detect and prevent economic adulteration, including the United States Pharmacopeia, the University of Minnesota’s National Center for Food Protection and Defense, and New Mexico State University’s Center for Animal Health, Food Safety and Bio-Security. We reviewed relevant FDA documents, including regulations, compliance manuals and inspection guides, sampling surveillance results, statements and presentations by agency officials, a contract to fund a research project at the National Center for Food Protection and Defense, and communications with industry and the public. We also reviewed published information from FDA, including its Strategic Priorities 2011-2015 report, its Pathway to Global Product Safety and Quality report, and Federal Register notices. We also reviewed previous GAO reports and recommendations on FDA’s oversight of food and medical products, as well as the agency’s strategic planning efforts. We compared FDA’s efforts to address economic adulteration with federal standards for internal control.

To determine the challenges FDA faces in detecting and preventing economic adulteration, we interviewed and obtained the views of FDA officials and stakeholders about the challenges the agency faces in addressing economic adulteration. Stakeholders included members of academia and representatives of industry and consumer groups who made presentations at FDA’s May 2009 meeting on economically motivated adulteration, as well as former FDA officials who were involved in agency efforts that led to that meeting. We also interviewed and obtained the views of the stakeholders on options for FDA to enhance its efforts to address economic adulteration. The views of these stakeholders are not representative of and cannot be generalized to all stakeholders.
In addition, we reviewed FDA and stakeholder documents related to challenges and options, as well as portions of the FDA Food Safety Modernization Act. We conducted this performance audit from September 2010 to October 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contacts named above, Jose Alfredo Gomez (Assistant Director), Geraldine Redican-Bigott (Assistant Director), Cheryl Williams (Assistant Director), Kevin Bray, Mollie Hertel, Sherrice Kerns, Susan Malone, Michael Rose, Cynthia Saunders, Ben Shouse, and Kiki Theodoropoulos made key contributions to this report.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.

Drug Safety: FDA Faces Challenges Overseeing the Foreign Drug Manufacturing Supply Chain. GAO-11-936T. Washington, D.C.: September 14, 2011.

Medical Devices: FDA Should Enhance Its Oversight of Recalls. GAO-11-468. Washington, D.C.: June 14, 2011.

Seafood Safety: FDA Needs to Improve Oversight of Imported Seafood and Better Leverage Limited Resources. GAO-11-286. Washington, D.C.: April 14, 2011.

Federal Food Safety Oversight: Food Safety Working Group Is a Positive First Step but Governmentwide Planning Is Needed to Address Fragmentation. GAO-11-289. Washington, D.C.: March 18, 2011.

Food Labeling: FDA Needs to Reassess Its Approach to Protecting Consumers from False or Misleading Claims. GAO-11-102. Washington, D.C.: January 14, 2011.

Food and Drug Administration: Response to Heparin Contamination Helped Protect Public Health; Controls That Were Needed for Working With External Entities Were Recently Added. GAO-11-95. Washington, D.C.: October 29, 2010.

Drug Safety: FDA Has Conducted More Foreign Inspections and Begun to Improve Its Information on Foreign Establishments, but More Progress Is Needed. GAO-10-961. Washington, D.C.: September 30, 2010.

Food and Drug Administration: Overseas Offices Have Taken Steps to Help Ensure Import Safety, but More Long-Term Planning Is Needed. GAO-10-960. Washington, D.C.: September 30, 2010.

Food Safety: FDA Could Strengthen Oversight of Imported Food by Improving Enforcement and Seeking Additional Authorities. GAO-10-699T. Washington, D.C.: May 6, 2010.

Food and Drug Administration: Opportunities Exist to Better Address Management Challenges. GAO-10-279. Washington, D.C.: February 19, 2010.

Food Safety: Agencies Need to Address Gaps in Enforcement and Collaboration to Enhance Safety of Imported Food. GAO-09-873. Washington, D.C.: September 15, 2009.

Food and Drug Administration: FDA Faces Challenges Meeting Its Growing Medical Products Responsibilities and Should Develop Complete Estimates of Its Resource Needs. GAO-09-581. Washington, D.C.: June 19, 2009.

Seafood Fraud: FDA Program Changes and Better Collaboration among Key Federal Agencies Could Improve Detection and Prevention. GAO-09-258. Washington, D.C.: February 19, 2009.

Dietary Supplements: FDA Should Take Further Actions to Improve Oversight and Consumer Understanding. GAO-09-250. Washington, D.C.: January 29, 2009.
In recent years, the United States experienced public health crises suspected to have been caused by the deliberate substitution or addition of harmful ingredients in food and drugs--specifically melamine in pet food and oversulfated chondroitin sulfate in the blood thinner heparin. These ingredients were evidently added to increase the apparent value of these products or reduce their production costs, an activity GAO refers to as economic adulteration. The Food and Drug Administration (FDA), an agency within the Department of Health and Human Services (HHS), has responsibility for protecting public health by ensuring the safety of a wide range of products that are vulnerable to economic adulteration. This report examines (1) the approaches that FDA uses to detect and prevent economic adulteration of food and medical products and (2) the challenges FDA faces in detecting and preventing economic adulteration and views of stakeholders on options for FDA to enhance its efforts to address economic adulteration. GAO reviewed FDA documents and interviewed FDA officials and stakeholders from academia and industry, among others. FDA primarily approaches economic adulteration as part of its broader efforts to combat adulteration in general, such as efforts to ensure the safety of imported products. Agency officials noted that the Federal Food, Drug, and Cosmetic Act does not distinguish among motives or require motive to be established to determine whether a product is adulterated. However, a senior FDA official told GAO that there is value in making a distinction between economic adulteration and other forms of adulteration to guide the agency's thinking about how to be more proactive in addressing this issue. An FDA official told GAO that when the agency detects any form of adulteration that has an adverse public health effect, it can conduct an investigation, request a recall to get the product off the market, and take enforcement action.
In addition to these broader efforts, some FDA entities also have undertaken efforts that specifically focus on economic adulteration. For example, FDA's Office of Regulatory Affairs has contracted with a research center to model risk factors for improved detection of economic adulteration of food. However, FDA entities have not always communicated or coordinated their economic adulteration efforts. For example, FDA's Center for Veterinary Medicine was unaware of and did not participate in two other entities' economic adulteration efforts involving products the veterinary center regulates. FDA officials and stakeholders GAO interviewed cited several key challenges to detecting and preventing economic adulteration, including increased globalization and lack of information from industry. Globalization has led to an increase in the variety, complexity, and volume of imported food and drugs, which complicates FDA's task of ensuring their safety. In addition to globalization, an increase in supply chain complexity--the growth in the networks of handlers, suppliers, and middlemen--also complicates FDA's task, making it difficult to trace an ingredient back to its source. FDA officials and stakeholders also said that gathering information from industry, such as information on potentially adulterated ingredients, presents challenges for FDA in detecting and preventing economic adulteration due to industry's reluctance to share such information because it is proprietary. Stakeholders cited greater oversight and information sharing as options to improve FDA's ability to combat economic adulteration. Specifically, some stakeholders supported increased oversight, such as the use of technology to trace adulterated ingredients back to the point of contamination, as an option to obtain more information on supply chains. 
Many stakeholders also suggested that FDA increase its regulatory and enforcement actions to address economic adulteration, including in instances that may not have a large negative public health impact. Stakeholders also suggested that greater communication with industry, through such means as an information clearinghouse or more informal interactions, could enhance FDA efforts to gather information on economic adulteration. GAO recommends that FDA adopt a working definition of economic adulteration, enhance communication and coordination of agency efforts, and provide guidance to agency centers and offices on the means of addressing economic adulteration. HHS neither agreed nor disagreed with GAO's recommendations, but cited planned actions related to adopting a definition and enhancing communication and coordination.
The Exon-Florio amendment to the Defense Production Act, enacted in 1988, authorized the President to investigate the impact of foreign acquisitions of U.S. companies on national security and to suspend or prohibit acquisitions that might threaten national security. The President delegated the investigative authority to the Committee on Foreign Investment in the United States, an interagency group established in 1975 to monitor and coordinate U.S. policy on foreign investment in the United States. In 1991, the Treasury Department, as chair of the Committee, issued regulations to implement Exon-Florio. The law and regulations establish a four-step process for reviewing foreign acquisitions of U.S. companies: (1) voluntary notice by the companies; (2) a 30-day review to determine whether the acquisition could pose a threat to national security; (3) a 45-day investigation to determine whether those concerns require a recommendation to the President for possible action; and (4) a presidential decision to permit, suspend, or prohibit the acquisition. In most cases, the Committee completes its review within the initial 30 days because there are no national security concerns or concerns have been addressed, or the companies and the government agree on measures to mitigate identified security concerns. In cases where the Committee is unable to complete its review within 30 days, the Committee may initiate a 45-day investigation or allow companies to withdraw their notifications. The Committee generally grants requests to withdraw. When the Committee concludes a 45-day investigation, it is required to submit a report to the President containing recommendations. If Committee members cannot agree on a recommendation, the regulations require that the report to the President include the differing views of all Committee members.
The President has 15 days to decide whether to prohibit or suspend the proposed acquisition, order divestiture of a completed acquisition, or take no action. While neither the statute nor the implementing regulation defines “national security,” the statute provides the following factors to be considered in determining a threat to national security:

Domestic production needed for projected national defense requirements.

The capability and capacity of domestic industries to meet national defense requirements, including the availability of human resources, products, technology, materials, and other supplies and services.

The control of domestic industries and commercial activity by foreign citizens as it affects the capability and capacity of the United States to meet national security requirements.

The potential effects of the proposed or pending transaction on sales of military goods, equipment, or technology to any country identified under applicable law as (a) supporting terrorism or (b) a country of concern for missile proliferation or the proliferation of chemical and biological weapons.

The potential effects of the proposed or pending transaction on U.S. international technological leadership in areas affecting national security.

Lack of agreement among Committee members on what defines a threat to national security and what criteria should be used to initiate an investigation may be limiting the Committee’s analyses of proposed and completed foreign acquisitions. From 1997 through 2004, the Committee received a total of 470 notices of proposed or completed acquisitions, yet it initiated only 8 investigations. Some Committee member agencies, including Treasury, apply a more traditional and narrow definition of what constitutes a threat to national security—that is, (1) the U.S. 
company possesses export-controlled technologies or items; (2) the company has classified contracts and critical technologies; or (3) there is specific derogatory intelligence on the foreign company. Other members, including the departments of Defense and Justice, argue that acquisitions should be analyzed in broader terms. According to officials from these departments, vulnerabilities can result from foreign control of critical infrastructure, such as control of or access to information traveling on networks. Vulnerabilities can also result from foreign control of critical inputs to defense systems or a decrease in the number of innovative small businesses researching and developing new defense-related technologies. While these vulnerabilities may not pose an immediate threat to national security, they may create the potential for longer term harm to U.S. national security interests by reducing U.S. technological leadership in defense systems. For example, in reviewing a 2001 acquisition of a U.S. company, the departments of Defense and Commerce raised several concerns about foreign ownership of sensitive but unclassified technology, including the possibility of this sensitive technology being transferred to countries of concern or losing U.S. government access to the technology. However, Treasury argued that these concerns were not national security concerns because they did not involve classified contracts, the foreign company’s country of origin was a U.S. ally, and there was no specific negative intelligence about the company’s actions in the United States. In one proposed acquisition that we reviewed, disagreement over the definition of national security resulted in an enforcement provision being removed from an agreement between the foreign company and the Departments of Defense and Homeland Security. 
Defense had raised concerns about the security of its supply of specialized integrated circuits, which are used in a variety of defense technologies that the Defense Science Board had identified as essential to our national defense—technologies found in unmanned aerial vehicles, the Joint Tactical Radio System, and cryptography and other communications protection devices. However, Treasury and other Committee members argued that the security of supply issue was an industrial policy concern and, therefore, was outside the scope of Exon-Florio’s authority. As a result of removing the provision, the President’s authority to require divestiture under Exon-Florio has been eliminated as a remedy in the event of noncompliance. Committee members also disagree on the criteria that should be applied to determine whether a proposed or completed acquisition should be investigated. While Exon-Florio provides that the “President or the President’s designee may make an investigation to determine the effects on national security” of acquisitions that could result in foreign control of a U.S. company, it does not provide specific guidance for the appropriate criteria for initiating an investigation of an acquisition. Currently, Treasury, as Committee Chair, applies essentially the same criteria established in the law for the President to suspend or prohibit a transaction, or order divestiture: (1) there is credible evidence that the foreign controlling interest may take action to threaten national security and (2) no laws other than the International Emergency Economic Powers Act are appropriate or adequate to protect national security. However, the Defense, Justice, and Homeland Security departments have argued that applying these criteria at this point in the process is inappropriate because the purpose of an investigation is to determine whether or not a credible threat exists. 
Notes from a policy-level discussion of one particular case further corroborated these differing views. Committee guidelines require member agencies to inform the Committee of national security concerns by the 23rd day of a 30-day review—further compressing the limited time allowed by legislation to determine whether a proposed or completed foreign acquisition poses a threat to national security. According to one Treasury official, the information is needed a week early to meet the legislated 30-day requirement. While most reviews are completed in the legislatively required 30 days, some Committee members have found that completing a review within such short time frames can be difficult—particularly in complex cases. One Defense official said that without advance notice of the acquisition, time frames are too short to complete analyses and provide input for the Defense Department’s position. Another official said that to meet the 23-day deadline, analysts have only 3 to 10 days to analyze the acquisition. In one instance, Homeland Security was unable to provide input within the 23-day time frame. If a review cannot be completed within 30 days and more time is needed to determine whether a problem exists or identify actions that would mitigate concerns, the Committee can initiate a 45-day investigation of the acquisition or allow companies to withdraw their notifications and refile at a later date. According to Treasury officials, the Committee’s interest is to ensure that the implementation of Exon-Florio does not undermine U.S. open investment policy. Concerned that public knowledge of investigations could devalue companies’ stock, erode confidence of foreign investors, and ultimately chill foreign investment in the United States, the Committee has generally allowed and often encouraged companies to withdraw their notifications rather than initiate an investigation. 
While an acquisition is pending, companies that have withdrawn their notification have an incentive to resolve any outstanding issues and refile as soon as possible. However, if an acquisition has been concluded, there is less incentive to resolve issues and refile, extending the time during which any concerns remain unresolved. Between 1997 and 2004, companies involved in 18 acquisitions withdrew their notification and refiled 19 times. In two cases, the companies had already concluded the acquisition and did not refile until 9 months to 1 year later. Consequently, the concerns raised by Defense and Commerce about potential export control issues in these cases remained unresolved for as much as a year—further increasing the risk that a foreign acquisition of a U.S. company would pose a threat to national security. We identified two cases in which companies that had concluded an acquisition before filing with the Committee withdrew their notification. In each case, the company has yet to refile. In one case, the company filed with the Committee more than a year after completing the acquisition. The Committee allowed it to withdraw the notification to provide more time to answer the Committee’s questions and provide assurances concerning export control matters. The company refiled, and was permitted to withdraw a second time because there were still unresolved issues. Four years have passed since the second withdrawal. In the second case, the company—which filed with the Committee more than 6 months after completing its acquisition—was also allowed to withdraw its notification. That was more than 2 years ago. In enacting Exon-Florio, the Congress, while recognizing the need for confidentiality, indicated a desire for insight into the process by requiring the President to report to the Congress on any transaction that the President prohibited. 
In response to concerns about the lack of transparency in the Committee’s process, the Congress passed the Byrd Amendment to Exon-Florio in 1992, requiring a report to the Congress if the President makes any decision regarding a proposed foreign acquisition. In 1992, another amendment also directed the President to report every 4 years on whether there is credible evidence of a coordinated strategy by one or more countries to acquire U.S. companies involved in research, development, or production of critical technologies for which the United States is a leading producer, and whether there are industrial espionage activities directed or assisted by foreign governments against private U.S. companies aimed at obtaining commercial secrets related to critical technologies. While the Byrd Amendment expanded required reporting on Committee actions, few reports have been submitted to the Congress because withdrawing and refiling notices to restart the clock limits the number of cases that result in a presidential decision. Since 1997, only two cases—both involving telecommunications systems—resulted in a presidential decision and a subsequent report to the Congress. Infrequent reporting of Committee deliberations on specific cases provides little insight into the Committee’s process to identify concerns raised during investigations and determine the extent to which the Committee has reached consensus on a case. Further, despite the 1992 requirement for a report on foreign acquisition strategies every 4 years, there has been only one report—in 1994. In conclusion, in recognition of the benefits of open investment, Exon-Florio comes into play only as a last resort. However, since that is its role, effective application in support of recognizing and mitigating national security risks remains critical. 
While Exon-Florio provides the Committee on Foreign Investment in the United States the latitude to address new emerging threats, the more traditional interpretation of what constitutes a threat to national security fails to fully consider the factors currently embodied in the law. Further, the practical requirement to complete reviews within 23 days to meet the 30-day legislative requirement, along with the reluctance to proceed to an investigation, limits agencies’ abilities to complete in-depth analyses. However, the alternative—allowing companies to withdraw and refile their notifications—increases the risk that the Committee, and the Congress, will lose visibility over foreign acquisitions of U.S. companies. Our report lays out several matters for congressional consideration to (1) help resolve the differing views as to the extent of coverage of Exon-Florio, (2) address the need for additional time, and (3) increase insight and oversight of the process. Further, we are suggesting that, when withdrawal is allowed for a transaction that has been completed, the Committee establish interim protections where specific concerns have been raised, specific time frames for refiling, and a process for tracking any actions being taken during a withdrawal period. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other Members of the Committee may have. - - - - - For information about this testimony, please contact Katherine V. Schinasi, Managing Director, Acquisition and Sourcing Management, at (202) 512-4841 or schinasik@gao.gov. Other individuals making key contributions to this product include Thomas J. Denomme, Allison Bawden, Gregory K. Harmon, Paula J. Haurilesko, John Van Schaik, Karen Sloan, and Michael Zola. 
Our understanding of the Committee on Foreign Investment in the United States’ process is based on our current work and builds on our review of the process and our discussions with agency officials for our 2002 report. For our current review, and to expand our understanding of the Committee’s process for reviewing foreign acquisitions of U.S. companies, we met with officials from the Department of Commerce, the Department of Defense, the Department of Homeland Security, the Department of Justice, and the Department of the Treasury. For prior reviews, we also collected data from and discussed the issues with representatives of the Department of State, the Council of Economic Advisors, the Office of Science and Technology, and the U.S. Trade Representative. Further, we conducted case studies of nine acquisitions that were filed with the Committee between June 28, 1995, and December 31, 2004. These case studies included reviewing files containing company submissions, correspondence between the Committee and the companies’ representatives, email traffic between member agencies, and minutes of policy-level meetings attended, at various times, by all 12 Committee members. We selected acquisitions based on recommendations by Committee member agencies and the following criteria: (1) the Committee permitted the companies to withdraw the notification; (2) the Committee or member agencies concluded agreements to mitigate national security concerns; (3) the foreign company had been involved in a prior acquisition notified to the Committee; or (4) GAO had reviewed the acquisition for its 2002 report. We did not attempt to validate the conclusions reached by the Committee on any of the cases we reviewed. We also discussed our draft report from our current review with officials from the Department of State and the U.S. Trade Representative’s office to obtain their views on our findings. 
To determine whether the weaknesses in provisions to assist agencies in monitoring agreements that GAO had identified in its 2002 report had been addressed, we analyzed agreements concluded under the Committee’s authority between 2003 and 2005. We conducted our review from April 2004 through July 2005 in accordance with generally accepted government auditing standards.

Office of International Investment: Coordinates policies toward foreign investments in the United States and U.S. investments abroad.

International Trade Administration: Coordinates issues concerning trade promotion, international commercial policy, market access, and trade law enforcement.

Defense Technology Security Administration: Administers the development and implementation of Defense technology security policies on international transfers of defense-related goods, services, and technologies.

Bureau of Economic and Business Affairs: Formulates and implements policy regarding foreign economic matters, including trade and international finance and development.

Criminal Division: Develops, enforces, and supervises the application of all federal criminal laws, except for those assigned to other Justice Department divisions.

Information Analysis and Infrastructure Protection: Identifies and assesses current and future threats to the homeland, maps those threats against vulnerabilities, issues warnings, and takes preventative and protective action.

Performs analyses and appraisals of the national economy for the purpose of providing policy recommendations to the President.

Directs all trade negotiations of and formulates trade policy for the United States.

Evaluates, formulates, and coordinates management procedures and program objectives within and among federal departments and agencies, and controls administration of the federal budget.

Coordinates the economic policy-making process and provides economic policy advice to the President.

Advises and assists the President in integrating all aspects of national security policy as it affects the United States.

Provides scientific, engineering and technological analyses for the President for federal policies, plans, and programs.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The 1988 Exon-Florio amendment to the Defense Production Act authorizes the President to suspend or prohibit foreign acquisitions of U.S. companies that may harm national security, an action the President has taken only once. Implementing Exon-Florio can pose a significant challenge because of the need to weigh security concerns against U.S. open investment policy--which requires equal treatment of foreign and domestic investors. Exon-Florio's investigative authority was delegated to the Committee on Foreign Investment in the United States (Committee)--an interagency committee established in 1975 to monitor and coordinate U.S. policy on foreign investments. In September 2002, GAO reported on weaknesses in the Committee's implementation of Exon-Florio. This review further examined the Committee's implementation of Exon-Florio. Several aspects of the process for implementing Exon-Florio could be enhanced, thereby strengthening the law's effectiveness. First, in light of differing views among Committee members about the scope of Exon-Florio--specifically, what defines a threat to national security--we have suggested that Congress should consider amending Exon-Florio to more clearly emphasize the factors that should be considered in determining potential harm to national security. Second, to provide additional time for analyzing transactions when necessary, while avoiding the perceived negative connotation of investigation on foreign investment in the United States, we have suggested that the Congress eliminate the distinction between the 30-day review and the 45-day investigation and make the entire 75-day period available for review. Third, the Committee's current approach to provide additional time for analysis or to resolve concerns while avoiding the potential negative impacts of an investigation on foreign investment in the United States is to encourage companies to withdraw their notifications of proposed or completed acquisitions and refile them at a later date. 
Since 1997, companies involved in 18 acquisitions have been allowed to withdraw their notification to refile at a later time. The new filing is considered a new case and restarts the 30-day clock. While withdrawing and refiling provides additional time while minimizing the risk of chilling foreign investment, withdrawal may also heighten the risk to national security in transactions where there are concerns and the acquisition has been completed or is likely to be completed during the withdrawal period. We are therefore suggesting that the Congress consider requiring the Committee Chair to (1) establish interim protections where specific concerns have been raised, (2) specify time frames for refiling, and (3) establish a process for tracking any actions being taken during the withdrawal period. Finally, to provide more transparency and facilitate congressional oversight, we are suggesting that the Congress may want to revisit the criterion for reporting circumstances surrounding cases to the Congress. Currently, the criterion is a presidential decision. However, there have only been two such decisions since 1997 and thus only two reports to Congress.
The acquisition function plays a critical role in helping federal agencies fulfill their missions. Other transaction authority provides the ability to acquire cutting-edge science and technology, in part through attracting entities that typically have not pursued government contracts because of the cost and impact of complying with government procurement requirements. This authority, when used selectively, is a tool intended to help the Science and Technology Directorate leverage commercial technology to reduce the cost of homeland security items and systems. Other transaction agreements are distinct from procurement contracts, grants, or cooperative agreements because of the flexibilities that they offer to both awardees and the government. For example, they allow the federal government and awardees flexibility in negotiating intellectual property and data rights, which stipulate each party’s rights to technology developed under the agreements. The flexibility of other transaction agreements is an important characteristic to attract nontraditional contractors. We previously reported, however, that because these agreements do not have a standard structure based on regulatory guidelines, they can be challenging to create and administer. The Homeland Security Act of 2002 originally authorized DHS to carry out a 5-year pilot program to exercise other transaction authority. Since 2007, other transaction authority has been extended annually through appropriations legislation. The Homeland Security Act of 2002 authorizes DHS to enter into an other transaction agreement that supports basic, applied, and advanced research and development; advances the development, testing, and evaluation of critical technologies; and carries out prototype projects. Pub. L. No. 107-296, § 831(a). Such agreements may involve business arrangements or structures that would not be feasible or appropriate under a procurement contract. 
Other transaction agreements for research do not require the involvement of a nontraditional contractor. One Science and Technology Directorate program funded other transaction agreements to promote homeland security by advancing the development and testing of rapid biological detectors. This “detect-to-protect” system would monitor a facility and detect the presence of biological agents in time to provide sufficient warning to facility occupants to limit their exposure (see left image in fig. 1). A different Science and Technology Directorate program used an other transaction agreement to develop and test a new high-voltage transformer that helps provide power during the recovery time following blackouts or outages resulting from severe natural disasters or terrorist attacks (see right image in fig. 1). The period of performance for other transaction agreements varies and may last longer than that of a traditional FAR contract. Other transaction agreements are generally structured in successive phases. At the end of a phase, the awardees submit a statement of work and technical and cost proposals for the next phase. Continuation between phases may be based on an independent technical evaluation and is not guaranteed. Unlike FAR contracts, which are generally limited to a length of 5 years, an other transaction agreement may continue as long as funding is available and work is required under a phase. As a result, other transaction agreements can vary in length, from 3 months to over 7 years, as shown in figure 2. Further, the funding for other transaction agreements may grow over time. For example, in one other transaction agreement, the Science and Technology Directorate obligated $200,000 in the first phase with an 8-month period of performance. This agreement has been in existence for almost 7 years with at least $5.3 million obligated on it.
Another two agreements were funded for $2 million at the time of award, but at completion, each other transaction agreement had obligations of approximately $100 million. The Science and Technology Directorate’s use of other transaction authority has declined since its peak in fiscal years 2005 and 2006. From fiscal year 2004 through fiscal year 2011, the Science and Technology Directorate entered into 58 other transaction agreements, totaling $583 million in obligations. Fourteen agreements remained active in fiscal year 2011, and the directorate has not entered into a new other transaction agreement since fiscal year 2010. New agreements peaked in fiscal year 2005, when the directorate entered into 28, and total obligations have declined since peaking at $151 million in fiscal year 2006 (see fig. 3). DHS officials offered reasons for the decline in use of other transaction authority. DHS acquisition officials told us that recently they have noted a decrease in the number of nontraditional contractors submitting proposals to use other transaction agreements while the use of FAR contracts has increased. Further, one official explained that as DHS’s requirements have changed over time, the requirements have targeted different industries. Finally, DHS officials have been uncertain about renewal of other transaction authority. For example, in fiscal year 2011, there was a gap in DHS’s other transaction authority because the continuing resolution did not extend the authority until April 2011. DHS officials explained that they were unsure if DHS had the authority to enter into new agreements or modify existing agreements under continuing resolutions. However, while use has declined, DHS officials said the flexibility provided by other transaction authority to conduct business with nontraditional contractors is still important to the directorate’s research needs.
For example, one program manager explained that without other transaction authority, DHS would be required to go through a traditional contractor to reach a nontraditional contractor, and this could affect the Science and Technology Directorate’s ability to directly obtain the necessary technology. DHS has made some progress in addressing challenges and the related recommendations we previously made regarding its use of other transaction agreements (see fig. 4). DHS has faced challenges overseeing its use of other transaction authority and establishing safeguards in the following five areas: (1) developing guidance for the use of audit provisions, (2) updating policies related to documentation of lessons learned, (3) identifying workforce training requirements, (4) conducting a workforce assessment, and (5) collecting relevant data on other transaction agreements (GAO-05-136; GAO-08-1088). Based on our review of all 27 available DHS agreement files, we found three gaps in the collection and reporting of information on its use of other transaction authority: (1) DHS does not consistently document the rationale for entering into an other transaction agreement in an agreement analysis document, despite DHS guidance to do so; (2) discrepancies between DHS’s data sources result in an incomplete picture of other transaction agreement activity, including an inaccurate annual report to Congress; and (3) DHS does not track the circumstances that permit the use of other transaction authority, such as the involvement of a nontraditional contractor, through the phases of an other transaction agreement. Involving nontraditional contractors is one of the benefits of having other transaction authority, yet without knowing how many are involved or for how long, DHS is not in a position to measure the benefits of using this special acquisition authority.
While DHS’s guidance requires it to document the rationale for using other transaction authority, DHS does not do this consistently. One of the following three conditions must be met to use an other transaction agreement for prototypes: (1) there is at least one nontraditional government contractor participating to a significant extent, (2) at least one-third of the total cost of a prototype project is to be paid out of funds provided by parties to the transaction other than the federal government, or (3) the DHS Chief Procurement Officer determines, in writing, that exceptional circumstances justify the use of a transaction that provides for innovative business arrangements or structures that would not be feasible or appropriate under a procurement contract. Since 2005, DHS other transaction guidance has required that this rationale be documented in an agreement analysis that should be maintained in the other transaction agreement file. We found that the agreement analysis is not consistently documented in the files. In our review of the files for 11 other transaction agreements for prototypes, we found inconsistent documentation of the agreement analysis: 7 files contained an agreement analysis document for the initial award and 4 did not. The 4 agreements without any agreement analysis documentation were awarded more recently, from 2007 to 2009. DHS other transaction agreement officers said they rely on the agreement analysis to learn the background on agreements they have inherited from previous agreement officers. Given the high turnover of acquisition staff, the agreement analysis document is an important tool to capture information about the rationale for use of other transaction authority. Recent annual reports to Congress on other transaction activity have been incomplete.
DHS is required to provide an annual report to Congress detailing the projects for which other transaction authority was used, the rationale for its use, the funds spent using the authority, the outcome of each project, and the results of any audits of such projects. We previously reported that DHS’s June 30, 2008, report to Congress did not include 14 agreements from the reporting period. Based on our current analysis of DHS’s fiscal year 2010 congressional report and other transaction agreement files, we found that DHS did not report three agreements that received funding totaling over $3.2 million in obligations, or 22 percent of other transaction obligations for the year. In addition, DHS does not include information on open agreements that did not involve the exercise of an option or the award of a new phase during the reporting period. While this information is not expressly required by the legislation, without it DHS is not providing a complete picture of the use of its authority. Based on our file review, we found other transaction agreements that were not reported in DHS’s 2009 and 2010 annual congressional reports (see fig. 5). For example, one open other transaction agreement, which was not reported in the fiscal year 2009 annual report, included a payment schedule with four dates during fiscal year 2009 totaling about $10 million. Without accurate information about the universe of other transaction agreements, Congress may be unable to oversee DHS’s use of its other transaction authority. Further, we found that DHS does not track information to measure the benefits of other transaction authority, which include reaching nontraditional contractors. DHS’s guidance states that the government team, which includes acquisition and program officials, should establish and track metrics that measure the value or benefits directly attributable to the use of other transaction authority. But DHS officials told us they have not established metrics.
In addition, DHS does not collect information at each phase of an other transaction agreement to determine if the original circumstances permitting the use of other transaction authority still exist. Specifically, DHS does not track the involvement of a nontraditional contractor throughout the various phases of the other transaction agreement. Based on our file review, we identified 11 other transaction agreements that cited the significant contribution of a subawardee nontraditional contractor as the circumstance permitting the use of other transaction authority. However, we found that six of these agreement files did not include documentation to demonstrate that the subawardee nontraditional contractor was involved during one or more phases of the agreement. For example, one nontraditional contractor was involved as a subawardee for 14 months, but the other transaction agreement lasted 40 months. While circumstances frequently change when conducting research or prototype development, the Science and Technology Directorate does not have visibility into the impact of these changes over time. In contrast, the Department of Defense (DOD) has determined that tracking information about participants, which includes nontraditional contractors, is important to managing its other transaction authority. In its guidance, DOD requires defense agencies and military departments to report significant changes to key participants involved in the agreement. This information is used to track the number of nontraditional contractors involved in other transaction agreements, which DOD officials use to measure the benefit of its other transaction authority. Even with the recently reduced use of other transaction authority, the Science and Technology Directorate continues to identify this as an important tool that provides the flexibilities needed to develop critical technologies. 
However, while other transaction agreements may carry the benefit of reaching nontraditional contractors to develop and test innovative homeland security technology, they also carry the risk of reduced accountability and transparency because they are exempt from federal procurement regulations. While DHS has responded to our prior recommendations, it still faces challenges addressing our recommendation to develop a mechanism to collect and track relevant data. Without consistent information on the universe of other transaction agreements, DHS continues to report inaccurate or incomplete information on the use of its other transaction authority in its annual report to Congress. This may undermine Congress’s ability to obtain a full picture of the use of this special acquisition authority. DHS has taken steps by updating its guidance to require documentation of lessons learned; however, the guidance is not being implemented. In particular, we found that DHS does not document lessons learned from completed other transaction agreements, nor has it consistently documented the agreement analysis, as required by guidance. Other transaction agreements may be in place for an extended time period and obligate a significant amount of funds, yet DHS does not have full information on other transaction activity at each phase of the award. Involving nontraditional contractors is one of the circumstances permitting the use of other transaction authority, yet without knowing how many are involved or for how long, DHS is not in a position to determine whether the continued use of the other transaction authority is still the best approach through the life of the agreement. If other transaction authority is made permanent, it is important for DHS to have complete information to understand and track its use of other transaction authority over time.
To promote the efficient and effective use by DHS of its other transaction authority to meet its mission needs, we recommend that the Secretary of Homeland Security direct the Under Secretary for Management to take the following three actions: (1) establish an action plan with specific time frames for fully implementing the prior GAO recommendation to establish a mechanism to collect and track relevant data on other transaction agreements, including the role of the nontraditional contractor, and systematically assess the data and report to Congress; (2) establish an action plan with specific time frames to help ensure full implementation of DHS other transaction guidance regarding documentation of lessons learned and documentation of the agreement analysis; and (3) establish a policy to review and document the circumstances permitting the use of other transaction authority at each new phase, throughout the life of the agreement, to determine if the continued use of an other transaction agreement is appropriate. We provided a draft of this report to DHS for comment. In written comments, DHS agreed with our recommendations and described actions under way or planned to address them. DHS also provided technical comments, which we have incorporated into the report as appropriate. DHS’s comments are reprinted in appendix III. We are sending copies of this report to interested congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
The objectives for this report were to review (1) the Department of Homeland Security’s (DHS) Science and Technology Directorate’s use of other transaction authority, (2) the extent to which DHS has addressed challenges we previously identified with its use of the authority, and (3) the information DHS collects and reports on the use of other transaction authority. To review the Science and Technology Directorate’s use of other transaction authority, we analyzed data provided by DHS’s Office of Procurement Operations Science and Technology Acquisitions Division on all other transaction agreements it has entered into since 2004. These data included agreement award date, agreement end date, annual obligations, and information on nontraditional contractors’ roles as prime awardees or subawardees. DHS officials compiled these data from DHS’s procurement system, annual reports to Congress on other transaction authority, and hard copy agreement files. To understand reasons for its trends in use of other transaction authority, we interviewed Science and Technology Directorate program officials and DHS acquisition officials. To determine the extent to which DHS has addressed challenges with its use of other transaction authority, we drew upon prior GAO reports on DHS’s use of other transaction authority, reviewed DHS other transaction agreement policies and procedures, conducted interviews with DHS officials, and reviewed other transaction agreement files that were active on or after April 1, 2008, through September 2011. To determine the steps taken to encourage the use of audit provisions, we reviewed DHS’s May 2008 guidance and its October 2009 updated guidance, Other Transactions for Research and Prototype Projects Guide. We identified and reviewed 4 other transaction agreements that were awarded after May 2008 to determine if DHS has included audit provisions as encouraged by the guidance. 
To determine the steps taken by DHS to document lessons learned, we identified DHS’s policy to document lessons learned in the October 2009 guidance, reviewed the files of 10 agreements that ended after October 2009, and interviewed acquisition and program officials to determine if they had participated in lessons learned discussions or documented lessons learned, as described in the guidance. To determine the steps taken by DHS to identify and implement workforce training requirements, we reviewed DHS’s Management Directive 0771.1, Procurement Operating Procedure 311, and the associated June 2011 cancellation; we obtained acquisition official training certifications; and interviewed program staff to determine if they had attended training. We also reviewed training materials to determine if other transaction authority was covered in the contract officer technical representative training attended by program officials. To identify steps taken by DHS to conduct a workforce analysis to determine if it has the appropriate number of agreement officers to execute other transaction authority, we interviewed DHS officials and requested workforce assessments addressing DHS’s acquisition workforce. To determine whether DHS collects relevant data on other transaction authority, we analyzed information that DHS collected from its review of the hard copy files, its procurement system, and annual reports to Congress on other transaction authority. To assess the information DHS collects and reports on its use of other transaction authority, we obtained an initial list of agreements from DHS’s Office of Procurement Operations Science and Technology Acquisitions Division; reviewed annual reports to Congress on other transaction authority; and interviewed program, acquisition, and general counsel officials. We also contacted officials at the Department of Defense to understand its policies and procedures to manage its other transaction authority. 
We identified 28 agreements that were active on or after April 1, 2008, through September 30, 2011. We conducted an in-depth file review for 27 of these 28 agreements; DHS was unable to locate one agreement file prior to the date we drafted this report. To determine the accuracy of DHS’s annual report to Congress, we reviewed the requirements for the report and compared the information included in these reports to the requirements and information from the other transaction agreement files. To identify information documenting the circumstances permitting the use of other transaction authority, such as information collected on nontraditional contractors, we looked at several documents in the agreement files. Specifically, we reviewed preaward documentation, such as awardee proposals, determination and findings, and agreement analysis documents, and postaward documentation, such as the signed other transaction agreement, all modifications, and statements of work. We interviewed program, acquisition, and general counsel officials to determine the process for verifying a nontraditional contractor’s status and significant contribution. In analyzing DHS’s agreements, we did not independently verify a contractor’s reported status as a nontraditional contractor. We conducted this performance audit from August 2011 through May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 1 presents the information in figure 4 in a noninteractive format. 
In addition to the contact named above, Penny Berrier, Assistant Director; Kristin Van Wychen; Beth Reed Fritts; Anne McDonough-Hughes; John Krump; Kenneth Patton; Mary Quinlan; Roxanna Sun; Robert Swierczek; Keo Vongvanith; and Rebecca Wilson made key contributions to this report.
When DHS was created in 2002, Congress granted it special acquisition authority to use “other transaction” agreements, which are special vehicles used for research and development or prototype projects. Unlike conventional contracts, other transaction agreements offer flexibilities to reach entities that traditionally have not done business with the government. They have risks, however, because they are exempt from the Federal Acquisition Regulation and other requirements. The Homeland Security Act of 2002 required GAO to report on the use of other transactions by DHS. In 2004 and 2008, GAO reported on challenges DHS faced. This report covers (1) the DHS Science and Technology Directorate’s use of other transactions, (2) DHS’s progress in addressing challenges, and (3) the information collected on the use of the authority and reported to Congress. GAO examined all 27 available other transaction agreement files, reviewed DHS’s other transaction policies and procedures, and interviewed cognizant officials. In the last 8 years, the Department of Homeland Security’s (DHS) Science and Technology Directorate has used its special acquisition authority to enter into 58 “other transaction” agreements. Use of the authority has declined since 2005. DHS officials said the decline is due to uncertainty about the agency’s continuing authority to enter into these agreements, among other things. DHS has made progress in addressing challenges and prior GAO recommendations related to its use of other transaction agreements in five areas. GAO’s analysis of DHS’s files and reports to Congress found gaps in the collection and reporting of information on other transactions. Specifically: DHS does not consistently document the rationale for entering into an other transaction agreement in the agreement analysis document, although DHS guidance requires it to do so. Recent annual reports to Congress did not contain information on all other transaction agreements. 
DHS does not collect information on the circumstances that permit the use of other transaction authority throughout the life of the agreement. Without complete information about the universe of other transaction agreements, neither Congress nor DHS can have full visibility into the use of this authority. GAO recommends that DHS (1) develop an action plan with specific time frames for fully implementing GAO’s prior recommendation on data collection and congressional reporting, (2) ensure full implementation of its guidance regarding documentation, and (3) establish a policy for reviewing the circumstances that permit the use of other transaction authority throughout the life of the agreement. DHS agreed with these recommendations.
Traditionally, financing the construction of public schools has been a function of local government. Until the 1940s, only 12 states provided any financial assistance for school construction. State participation increased during the baby boom of the 1950s, when local communities needed classrooms and states had surplus revenues. Even with such increases, however, localities remained mainly responsible for school facilities construction. Beginning in the 1970s, litigation in many states highlighted disparities in school districts’ ability to raise money for public education. Court decisions resulted in many states increasing funding levels and playing a larger role in lessening financial disparities between rich and poor districts. Although these decisions have pertained mainly to the state’s role in providing for instruction rather than to buildings, the past 20 years have seen a general increase in state involvement with facilities-related matters. By 1991, state funding for school facilities totaled more than $3 billion, or about 20 percent of all funds used for public school construction. Increasingly, the physical condition of school buildings has become a concern in school finance litigation. In 1994, for example, the Arizona Supreme Court found the state’s school funding system unconstitutional on the basis of disparities in the condition of its schools. Also, court challenges in Texas and Ohio have focused on inequities in districts’ abilities to make capital expenditures and the importance of suitable facilities for a constitutionally acceptable education system. School finance experts expect disparities in facilities to remain an aspect of litigation. Meanwhile, states face pressure from other rising budget expenditures, such as for health care and prisons. Forty-eight states reported participating in at least one of the three areas of state involvement in school facilities that we identified.
State involvement ranged from participation in all three areas to participation in just one or none of the areas. (State-by-state involvement as reported by state educational agencies (SEAs) is summarized in app. II, table II.1.) In all, 40 states reported providing ongoing facilities funding, 44 states reported participating in technical assistance or compliance review, and 23 states reported collecting and maintaining information about the condition of school facilities. (See fig. 1.) We characterized 13 states as having comprehensive facilities programs: Alabama, Alaska, Florida, Georgia, Hawaii, Kentucky, Maryland, Massachusetts, Minnesota, North Carolina, Ohio, South Carolina, and West Virginia. Our review of state programs addressed the extent of state involvement and did not evaluate program effectiveness. We considered programs comprehensive if they had a facilities program framework in place that provided ongoing funding, conducted a variety of technical assistance and compliance review activities, maintained current information on the condition of school buildings statewide, and had one or more full-time-equivalent (FTE) staff working on facilities matters. Although a total of 19 SEAs reported activities in all three areas, for some states the level of activity reported in at least one of these areas was limited in some way. For example, Pennsylvania participated in all three areas, including collecting information on the condition of facilities; however, officials reported that the information was updated only when a local educational agency (LEA) applied for project funding. Since the interval between these updates may be as much as 20 years, the information maintained by the state may be out of date. Kentucky is an example of a state we characterized as having a comprehensive program. A facilities official reported that the SEA Division of Facilities Management provided guidance to LEAs in implementing locally developed 4-year facility plans that included detailed information on the condition of school buildings.
Eight professional staff and three support staff provided the LEAs with information and guidance throughout the planning, budgeting, and building of school facilities. Staff also reviewed building plans for compliance with education specifications. The state had three funding assistance programs and reported providing about $66 million in state financial assistance for facilities in state fiscal year 1994—mostly through a $100 per student capital outlay allotment paid as part of the state foundation funding. The SEA reviewed all major LEA construction and renovation projects, whether or not state funding was used. Another 21 states reported activities in two of the three areas. Most of them provided funding and technical assistance and compliance review but did not collect and maintain information on the condition of school facilities. For example, an Indiana official reported that the state provided funding through three programs, and the SEA staff reviewed architectural plans for compliance with state education administrative codes and advised local officials on funding and other processes related to facilities planning and construction. Eight states reported participation in just one area. For example, an Illinois official reported that while the state did not have an ongoing funding program or collect condition data, the SEA facilities staff did provide technical assistance and compliance review for certain locally funded projects to correct life/safety code violations. Along with the variation among states in facilities activities and level of involvement, we also found differences in state views and traditions on the extent of the state role in providing facilities assistance. Several states reported many years of providing funding, illustrating the view that states have a role in school facilities assistance. Officials in other states expressed the view that school facilities matters are the responsibility of the local districts. 
A total of 40 states reported providing ongoing financial assistance to local districts for the construction of public elementary and secondary schools. Collectively, these states reported providing an estimated $3.5 billion in grants and loans for school facilities construction in state fiscal year 1994. Ten states reported no regular, ongoing programs to assist districts with construction costs, although some of these had recently provided one-time appropriations for facilities or considered proposals for funding school construction. While most states reported providing financial assistance for school construction, funding levels varied widely. On a per pupil basis, state funding provided in fiscal year 1994 ranged from a high of $2,254 per student in Alaska to a low of $6 per student in Montana (see table 2). The median amount of assistance provided per student was about $104. With the exception of Hawaii and Alaska, which provided full or nearly full state support for school construction, all states provided less than $300 per student. Eight states—Arkansas, Indiana, Michigan, Minnesota, North Dakota, Ohio, Utah, and Virginia—reported providing at least some portion of their assistance in the form of loans to districts. The following descriptions of funding programs in three states provide more context for the amount of state aid provided for school facilities. Florida has eight programs to aid school facilities: Florida has provided financial assistance for school facilities construction since 1947. Its eight funding programs for facilities assistance are funded from gross receipts from utility taxes and motor vehicle licensing tax revenues. Two programs are based on district enrollment growth relative to enrollment growth statewide; a third program provides funding for maintenance based on the square footage and age of a district’s buildings plus building replacement costs.
The remaining programs target projects such as joint-use facilities, vocational-technical centers, and projects to assist districts using modified school calendars. One program targets funding to districts with limited ability to raise local revenues for facilities. In New Hampshire, facilities aid is linked to LEA consolidation: New Hampshire reimburses local districts for a percentage of their construction debt. The state contribution ranges from 30 to 55 percent and favors districts that have consolidated. Districts can receive an extra 20 percent for portions of projects attributable to the construction of kindergartens. (New Hampshire is the only state without mandatory kindergarten.) The state reimburses districts over a minimum of 5 years or the longest period of time required by the funding instruments used by the district. In Kansas, facilities aid is based on district wealth: Kansas began providing funding to local districts for school facilities in state fiscal year 1993. Depending on the assessed valuation per pupil of the school district, the state program provides aid ranging from none to a high of around 50 percent for less wealthy districts. No cap exists on the total amount of assistance the state provides. Funding is provided as an entitlement to school districts; the state pays its share of local debt service for all districts passing bond measures. Not only do funding levels vary among states in any 1 year, but construction funding can vary dramatically within states from year to year, making it difficult to capture the complete picture of state support in one snapshot. Some states supplement their regular construction funding programs from time to time with additional monies for school facilities construction. 
For example, a state official in New Jersey reported that in fiscal year 1993 the state made a one-time appropriation of $250 million to address health and life/safety needs in schools in addition to the regular facilities funding provided that year. In several states where we obtained data for multiple years, construction funding reported by officials increased or decreased more than 50 percent between fiscal years 1993 and 1994. These fluctuations can reflect such circumstances as changes in school construction needs or in the availability of state funding.

Putting the amount of state assistance for school construction—about $3.5 billion nationwide in state fiscal year 1994—in the context of total facilities expenditures is difficult because of limited data on local spending, a major part of those expenditures. When we asked state officials for this information, many reported that they did not have or collect these data. Preliminary data from the Bureau of the Census show that, counting revenues from all sources, total expenditures for school construction and purchases of land and existing buildings and improvements to buildings were about $15.7 billion for the 1991-92 school year.

When we asked states whether they had any information about unmet needs for construction funding, officials from several states noted instances of facilities needs outstripping available state resources. For example, a state official in Alaska reported that in fiscal year 1994 local districts submitted requests for funding totaling $880 million for a grant program that received an appropriation of $171 million. Similarly, a state official from Wyoming noted that district requests for funding totaled $42 million in fiscal year 1995, although the state had only $13.5 million available. In contrast, officials from two states commented that they had no backlog in requests for funding.
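As a rough illustration of the wealth-based sliding-scale formulas described above (for example, Kansas’s aid share that falls from around 50 percent to none as assessed valuation per pupil rises), the following sketch computes a state aid share from district property wealth. All district names, valuation figures, and the share band are hypothetical assumptions for illustration only; they are not any actual state’s parameters.

```python
def state_aid_share(valuation_per_pupil, statewide_avg,
                    min_share=0.0, max_share=0.50):
    """Return the state's hypothetical share of approved project costs.

    Districts at or above the statewide average valuation per pupil
    receive the minimum share; the share rises linearly toward the
    maximum as district wealth falls toward zero.
    """
    if valuation_per_pupil >= statewide_avg:
        return min_share
    wealth_ratio = valuation_per_pupil / statewide_avg  # between 0.0 and 1.0
    return min_share + (max_share - min_share) * (1.0 - wealth_ratio)

# Illustrative (invented) districts, valuation per pupil in dollars.
statewide_avg = 200_000
for district, valuation in [("A", 300_000), ("B", 150_000), ("C", 50_000)]:
    share = state_aid_share(valuation, statewide_avg)
    print(f"District {district}: state pays {share:.1%} of approved costs")
```

The linear interpolation is only one possible shape; actual programs may use stepped brackets or eligibility cutoffs (as Montana’s below-average-wealth restriction suggests), but the underlying idea is the same: the poorer the district, the larger the state’s share.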
States reported using a variety of mechanisms to allocate funding for facilities, and many reported having multiple programs. Some programs provided assistance to districts requesting aid for specific construction projects; others provided each district with a fixed amount of funding per student or a proportion of available funding based on such factors as a district’s facility needs relative to facility needs statewide. Delaware exemplifies how programs can vary within a state. An official reported three funding programs: one focused on major capital projects that provides funds on a project-by-project basis accounting for district ability to pay, a second program for scheduled maintenance and repairs that distributes available funding to districts on the basis of enrollment and requires a local match, and a third program for unscheduled repairs that uses a flat-rate formula including factors of building age and enrollment.

While states reported using various ways to distribute funds, we found common features among the programs.

Most states reported prioritizing funding toward districts with less ability to pay. While states reported using a variety of ways to prioritize which districts receive funding and how much they receive, most reported considering district ability to pay in awarding some portion of assistance. Of the 40 states providing construction funding, 34 reported programs that gave some weight to ability to pay, either through eligibility criteria, allocation formulas, or prioritization criteria. For example, Montana has restricted its debt service subsidy program to districts whose taxable property wealth per pupil was less than the statewide average. Maryland reported providing districts with a percentage of approved project costs that ranged from 50 to 80 percent, depending on ability to pay. States varied, however, in the degree to which they considered district wealth.
For example, officials in North Carolina reported four funding programs, one of which targeted assistance to poorer districts with critical facility needs. In New York, all construction funding has been provided through one program that considered district wealth in providing a percentage of approved project costs. In addition to ability to pay, other funding prioritization factors that state officials reported using included enrollment growth and facility overcrowding, physical condition of buildings, and whether districts had consolidated.

Most states reported providing aid as grants rather than loans. Only 8 of the 40 states reported providing any assistance for school facilities in the form of loans to local districts.

Most states reported providing facilities funding through state budget appropriations. A total of 29 of the 40 states reported providing at least a portion of construction funding through state budget appropriations. Another often-used source of funding was state bonds. A few states also reported using special revenue sources dedicated to school construction. For example, Wyoming reported using mineral royalties from school-owned lands to support its capital construction grant program.

Most states reported providing no assistance for preventive or routine maintenance through their construction funding programs. Officials typically described state programs as providing assistance for the construction and renovation of school buildings. While many states also reported funding major maintenance projects, such as roof replacements, most said they did not provide assistance for routine or preventive maintenance.

Forty-four states reported providing technical assistance to LEAs for facilities or reviewing facilities projects for compliance with state requirements. (See app. II, table II.2.)
Although technical assistance and compliance review activities tended to be similar among states, the level of involvement varied considerably, as did the number of staff devoted to the efforts. As we conducted our study, we also found that agencies other than the SEA had at least some responsibility for school facilities. However, pursuing information about activities in these other agencies was beyond the scope of this study, and we focused mainly on the activities and staffing levels at the SEAs.

A total of 44 states reported providing technical assistance to LEAs—specifically, information or guidance on facilities regulations, planning, construction, or maintenance. Technical assistance was typically furnished by phone, through publications and manuals, at meetings between SEA and LEA representatives, or through workshops and formal training. In some states, assistance was limited to answering a few LEA questions; in others, it also included guidance on needs assessments and long-range plans, building design, hazardous materials, and engineering, legal, and architectural matters, among other subjects.

We found considerable variance in the levels of technical assistance provided. Some states provided a limited level of technical assistance. For example, Montana’s SEA reported providing information—but not training—on regulations, requirements, and other facility guidelines. Oregon reported providing guidance only on asbestos removal regulations and processes, including sponsoring a yearly training class. Other SEAs were more involved in technical assistance activities. For example, a Maryland SEA official reported that its facilities staff spent a large portion of their time in the field working with local committees to plan and design school buildings. They conferred with architects on school design; presented training for school board officials, engineers, architects, and school custodial staff; and provided a variety of facilities issues publications to LEAs.
A total of 37 states also reported compliance review activities relative to building and fire codes, state education specifications, or other state regulations. Compliance review activities were fairly standard among states, consisting primarily of reviewing project architectural plans to ensure that they conform to regulations and requirements. Over two-thirds of the 50 states reported overseeing compliance with education specifications or other state regulations associated with facilities, while nearly one-third reported reviewing plans for building or fire code compliance.

Although states’ compliance review activities were fairly standard, their levels of involvement varied. For example, Ohio officials reported that the facilities unit reviewed architectural plans for conformance with education standards but did little compliance enforcement. In contrast, Connecticut officials reported that the SEA facilities unit reviewed plans for compliance with several codes, including state building, life safety, and health codes, as well as federal health, safety, and accessibility requirements. Approval of the facilities unit was required for a project to receive state aid.

Of the 44 states providing technical assistance or compliance review, a total of 28 reported SEA staffs with fewer than six FTE employees involved in facilities-related work—including 12 states with one FTE or fewer (see fig. 2). SEA facilities staffing levels in the 44 states ranged from 0.02 to 72 FTEs. (See app. II, table II.2.) Officials reported that facilities staff expertise may include finance, education specifications, building codes, and plans checking. Some reported architects, engineers, or attorneys on staff. In many states, SEA officials told us that other state agencies were involved to at least some extent in school facilities activities—in particular, compliance activities.
For example, most states reported that the State Fire Marshal had school facilities responsibilities, often related to code compliance or building inspection. Other state agencies frequently mentioned by officials as having facilities responsibilities included departments of health, labor, and environment. In three states—California, Hawaii, and Maryland—major facilities responsibilities were shared among the SEA and other agencies. For example, in California, staff in two divisions of the Department of General Services as well as the SEA played major roles. Finally, in two states—South Dakota and West Virginia—the major school facilities activities were handled outside the SEA. For example, in South Dakota, all facilities responsibility was transferred to the State Fire Marshal’s Office by legislation passed in 1994.

Facility staffing levels are changing in some states. Several SEAs reported proposed or enacted reductions in facilities staff or facilities responsibilities. For example, in Maine, since a 1991 recession, the facilities unit has been reduced from three professional staff to one as part of a general reduction in the size of the SEA. More recently, Florida has reduced its facilities unit staffing by 75 percent and New York by 25 percent for fiscal year 1996. On the other hand, two SEAs reported that they hope to increase their facilities units by one or two staff.

Fewer states reported collecting and maintaining current information on the condition of school buildings than reported providing financial or technical assistance and compliance review for facilities. We considered states to collect such data if the information documented the condition of individual schools and was collected or at least updated in the last 5 years. A total of 23 states reported maintaining information on the condition of school buildings (see fig. 3).
Of these, 15 states reported collecting facility condition data on a regular, ongoing basis, updating their information annually or every few years. The remaining eight states reported conducting a one-time study of the condition of their facilities sometime in the last 5 years.

Seventeen states reported maintaining other types of information on their facilities that was not specifically related to building condition. In many cases this information was an inventory of school buildings, which often included such data as the number of buildings, their age and size, and building use. Other types of facility information that states collected included data on the total appraised value of school facilities and building architectural plans. Nearly all states collecting information on the condition of school buildings reported maintaining other facilities data as well.

Ten states reported that they maintained no information on school facilities or did so on an extremely limited basis, such as retaining current application materials and financial records or reports on the general adequacy of facilities resulting from standard school accreditation reviews. For example, in Connecticut, the official we interviewed reported that the state collected only the information and plans necessary for the projects under review at any given time. (For a delineation of the facilities information collected by individual states, see app. II.)

For the 23 states collecting some type of data on the condition of facilities, the comprehensiveness of the information and the frequency of data collection varied. Some states reported using professional architects or state-trained staff to assess the condition of specific components of the building structure, such as walls and roofs, or building systems, such as plumbing and heating. Often these labor-intensive studies were conducted as one-time efforts or were updated once every several years.
Other states reported relying on districts to report an overall rating for the condition of their buildings. For example, in Alabama, districts must complete an annual building inventory survey that includes one item to rate the overall condition of buildings on a four-point scale from “excellent” to “should be razed.”

When we asked state officials about any changes they would like to make to their information gathering systems, almost one-third said they would like to collect additional information. Several expressed interest in developing an inventory of their school buildings or updating their present inventories. Many states were also interested in starting to gather building condition information or updating condition information collected earlier. In addition to gathering more data, officials in many states expressed an interest in automating more of the information they collect. For example, officials in several states hoped to make data collection from local districts more interactive using computers. Two state officials expressed interest in computerizing architectural plans.

On the other hand, officials in several states believed their current level of data collection was sufficient. In six states that collected relatively little facilities data, officials said they did not want to gather any additional information, and a few said the information they had was adequate for the scope of their state’s program. For example, in Rhode Island, the state aid specialist said that as long as the state program remains locally oriented, the state requires no further data. Officials in a few states said that they would have to increase their staff to collect or analyze more information and did not want to do this.

Although local governments have traditionally been responsible for facilities construction, renovation, and major maintenance, most SEAs have established a state presence in school facilities matters using a variety of approaches.
However, states’ levels of involvement varied: about one-fourth of them had programs that included ongoing funding assistance, a variety of technical assistance and compliance review activities, and data collection on the condition of facilities; 10 states were involved in one or none of the activities. Further, officials reported differing viewpoints and traditions on state involvement in facilities matters. Such variations in approach and philosophy among states illustrate the lack of consensus on the most appropriate and effective state role.

Today, state involvement in school facilities remains in flux. Because the physical condition of school buildings has become a concern in school finance equity litigation, experts expect disparities in facilities to be a continuing and pressing issue. States will likely be looked to for ways to lessen these disparities. State governments, however, face pressure from other rapidly rising budget expenditures—such as health care—that compete for the same limited funds.

The Department of Education reviewed a draft of this report and had no comments. In addition, we provided state-specific information to state officials for verification and incorporated their comments in the text as appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to appropriate House and Senate Committees and all members, the Secretary of Education, and other interested parties. Please contact me on (202) 512-7014 or my assistant director, Eleanor L. Johnson, on (202) 512-7209 if you or your staff have any questions. Major contributors to this report are listed in appendix III.
To determine the extent to which states provided funding and technical assistance and compliance review for school facilities and maintained information on the condition of school buildings, we conducted telephone interviews with state officials responsible for school facilities in all 50 states. In nearly all cases, we spoke with staff at the state education agency (SEA) responsible for school facilities. In a few states, we also spoke with officials located in other state agencies extensively involved in school facilities. Where necessary, for clarification, we conducted follow-up telephone interviews. We supplemented this information with supporting documentation provided by state officials. All data were self-reported by state officials, and we did not verify their accuracy. We conducted our work between October 1994 and September 1995 in accordance with generally accepted government auditing standards.

The focus of our study was state fiscal year 1994. Typically, this covered the period from July 1, 1993, to June 30, 1994. We learned of changes in state programs that occurred after this time during follow-up interviews with state officials and included these when they suggested trends in changing levels of state involvement.

States’ involvement in providing assistance for school facilities ranged widely (see table II.1). To illustrate, profiles of assistance provided in three states—Colorado, Maine, and Georgia—are presented following table II.1.

Colorado requires that each local education agency (LEA) set aside $202 per pupil of the state and local basic aid funding to be used for long-range capital needs such as new facilities, major renovations, land, school buses, or risk management purposes such as liability insurance or workers compensation. The funding cannot be used for debt service.
The Colorado state education agency (SEA) has no staff assigned to facilities activities, and technical assistance is limited to answering a few questions during the year. Colorado does not routinely collect information on facilities; an official told us that measuring the condition of schools is considered a local issue.

The Maine School Construction Program provided LEAs with about $43.5 million in state fiscal year 1994 to pay debt service on capital construction bonds through the state’s foundation funding. The amount received is based in part on the assessed valuation per student and on project priority criteria such as overcrowding. A staff of three in the Division of School Business Services spend part of their time overseeing the facilities funding program and providing information and assistance to LEAs throughout the funding and construction processes. The division works with LEAs on compliance with state education program guidelines and coordinates project review and approval among other agencies, such as the State Fire Marshal and the Bureau of General Services. The SEA does not currently gather information about the condition of buildings but hopes to conduct a survey of LEAs to gather descriptive information on their facilities.

The Georgia Department of Education provides facilities assistance to LEAs through a system of annual entitlements based on district needs, including enrollment increases. LEAs may permit their entitlements to accrue over time, which allows each school system to undertake significant projects rather than make minor repairs year after year. LEAs must submit to the state a 5-year comprehensive facilities plan validated by an outside survey team and provide from 10 to 25 percent of the project costs. The SEA Facilities Services Section has field consultants who provide assistance to their assigned LEAs and an architect who reviews all architectural project plans for compliance with state requirements.
Georgia provided about $151 million to LEAs for facilities in state fiscal year 1994.

Levels of compliance review and technical assistance varied widely among states. (See table II.2.) Profiles of three states that exemplify this variance—New York, Washington, and Wisconsin—follow table II.2. [Table II.2, which lists each state’s technical assistance and compliance review activities (coded A, B, and C below) and its facilities staffing in full-time equivalents (FTE), is not reproduced here.]

A. Technical assistance includes providing information or guidance to LEAs on funding or construction issues using one or more of a variety of activities, including telephone consultations or site visits, attending district meetings, presenting training to district staff or those working on school construction projects, or publishing informational documents for district use.

B. Compliance review for building or fire codes includes reviewing architectural plans for conformance with building, mechanical, electrical, or related structural and life/safety codes.

C. Compliance review for education specifications or other state regulations includes reviewing architectural plans or other documents for conformance with state education specifications such as for the size and use of school building space. It also includes reviewing documents for conformance with other state requirements, such as the use of women- or minority-owned companies, or wages paid to school construction workers.

New York’s SEA staff present workshops and publish newsletter articles on regulations and facilities planning as well as architectural, engineering, and legal issues. They also provide information to about 100 telephone callers per day. Staff review architectural plans for compliance with the building code and education specifications. They assess the need for projects, approve sites, enforce the state environmental review act, determine eligibility for state building aid and petroleum overcharge funds, issue building permits, and approve leases.
The SEA oversees a fire inspection program that enforces building and fire codes for existing buildings through annual inspections conducted by LEA-hired inspectors. Staff certify completed projects for occupancy, provide on-call assistance for environmental hazard problems, and are implementing a requirement for LEA comprehensive 5-year capital plans.

Washington’s SEA school facilities section staff provide information to local school districts on health and safety issues and ensure that state-assisted school construction projects comply with state law. The section provides assistance to school districts and other state and federal agencies by acting as an information clearinghouse.

Wisconsin’s SEA staff provide assistance interpreting the building code and health and safety regulations—usually by telephone or by sending documents by mail. The staff present occasional on-site workshops, referrals to other agencies, and assistance with LEA facilities plans.

Nearly half of the states maintained information on the condition of school facilities. Some collected it on an ongoing basis, while others had done a recent, one-time study. Most states maintained information on facilities other than condition. Only 10 states maintained extremely limited or no information on facilities. Table II.3 describes the extent of facilities information maintained by each state.

R. Jerry Aiken, Computer Specialist (Programmer/Analyst)
D. Catherine Baltzell, Supervisory Social Science Analyst
Sandra L. Baxter, Senior Evaluator
Tamara A. Lumpkin, Evaluator
Stanley H. Stenersen, Senior Evaluator
Virginia A. Vanderlinde, Evaluator
Dianne L. Whitman, Evaluator
Pursuant to a congressional request, GAO examined the role of states in supporting school facilities improvements, focusing on: (1) state funding and technical assistance to local school districts; and (2) the extent to which states collect information on school building conditions. GAO found that: (1) most states have a role in school facilities construction, renovation, and maintenance, and 13 states have established comprehensive facilities programs; (2) states provided $3.5 billion for school facilities construction during fiscal year 1994; (3) state financial assistance for school facility construction ranged from $6 per student to more than $2,000 per student; (4) the number of staff devoted to providing facilities guidance and oversight varied, with most states having fewer than 6 full-time equivalent staff; (5) 23 states collected data on the condition of school buildings in their area, 17 states collected data on building inventories, and 10 states collected no data on school facilities; and (6) some state officials believe that school facilities matters are primarily a local responsibility.
A comprehensive reassessment of agencies’ roles and responsibilities is central to any congressional and executive branch strategy that seeks to bring about a government that is not only smaller but also more efficient and effective. GPRA provides a legislatively based mechanism for Congress and the executive branch to jointly engage in that reassessment. In crafting GPRA, Congress recognized the vital role that consultations with stakeholders should have in defining agencies’ missions and establishing their goals. Therefore, GPRA requires agencies to consult with Congress and other stakeholders in the preparation of their strategic plans. These consultations are an important opportunity for Congress and the executive branch to work together in reassessing and clarifying the missions of federal agencies and the outcomes of agencies’ programs.

Many federal agencies today are the product of years of accumulated responsibilities and roles as new social and economic problems have arisen. While adding the particular roles and responsibilities may have made sense at the time, the cumulative effect has been to create a government in which all too frequently individual agencies lack clear missions and goals and related agencies’ efforts are not complementary. Moreover, legislative mandates may be unclear, and Congress, the executive branch, and other stakeholders may not agree on the goals an agency and its programs should be trying to achieve, the strategies for achieving those goals, and the ways to measure their success. For example, we reported that the Environmental Protection Agency (EPA) had not been able to target its resources as efficiently as possible to address the nation’s highest environmental priorities because it did not have an overarching legislative mission and its environmental responsibilities had not been integrated.
As a result of these problems, EPA could not ensure that its efforts were directed at addressing the environmental problems that posed the greatest risk to the health of the U.S. population or the environment. To respond to these shortcomings, EPA is beginning to sharpen its mission and goals through its National Environmental Goals Project, a long-range planning and goal-setting initiative that, as part of EPA’s efforts under GPRA, is seeking to develop a set of measurable, stakeholder-validated goals for improving the nation’s environmental quality.

The situation at EPA is by no means unique. Our work has shown that the effectiveness of other agencies, such as the Department of Energy and the Economic Development Administration, also has been hampered by the absence of clear missions and strategic goals. Unclear missions are often accompanied by program overlap and fragmentation, which waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. For example, the $20 billion appropriated for employment assistance and training activities in fiscal year 1995 covered 163 programs that were spread over 15 agencies. Our work showed that these programs were badly fragmented and in need of a major overhaul. Moreover, in reviewing 62 programs that provided employment assistance and training to the economically disadvantaged, we found that most programs lacked very basic information needed to manage. Fewer than 50 percent of the programs collected data on whether program participants obtained jobs after they received services, and only 26 percent collected data on wages that participants earned. Both houses of Congress in recent months have undertaken actions to address the serious shortcomings in the federal government’s employment assistance and training programs, although agreement has not been reached on the best approach to consolidation.

In another example, we identified 8 agencies that are administering 17 different programs assisting rural areas in constructing, expanding, or repairing water and wastewater facilities.
These overlapping programs often delayed rural construction projects because of differences in the federal agencies’ timetables for grants and loans. Also, the programs experienced increased project costs because rural governments had to participate in several essentially similar federal grant and loan programs with differing requirements and processes. We found that, because of the number and complexity of programs available, many rural areas needed to use a consultant to apply for and administer federal grants or loans.

The examples I have cited today of agencies with unclear missions and other agencies that are duplicating each other’s efforts are not isolated cases. Our work that has looked at agencies’ spending patterns has identified other federal agencies whose missions deserve careful review to ensure against inappropriate duplication of effort. As I noted in an appearance before the Senate Committee on Governmental Affairs last May, in large measure, problems arising from unclear agency missions and goals and overlap and fragmentation among programs can best be solved through an integrated approach to federal efforts. Such an approach looks across the activities of individual programs to the overall goals that the federal government is trying to achieve.

The GPRA requirement that agencies consult with Congress in developing their strategic plans presents an important opportunity for congressional committees and the executive branch to work together to address the problem of agencies whose missions are not well-defined, whose goals are unclear or nonexistent, and whose programs are not properly targeted. Such consultations will be helpful to Congress in modifying agencies’ missions, setting better priorities, and restructuring or terminating programs. The agencies’ consultations with Congress on strategic plans will begin in earnest in the coming weeks and months.
The Office of Management and Budget’s (OMB) guidance to agencies on GPRA requirements for strategic planning said that agencies would be asked to provide OMB with selected parts of their strategic plans this year. Some departments, such as the Department of the Treasury, are scheduling meetings on their strategic plans with the appropriate authorization, appropriation, and oversight committees. As congressional committees work with agencies on developing their strategic plans, they should ask each agency to clearly articulate its mission and strategic goals and to show how program efforts are linked to the agency’s mission and goals. Making this linkage would help agencies and Congress identify program efforts that may be neither related to the agency’s mission nor contributing to its desired outcomes. It would also help Congress to identify agencies whose efforts are not coordinated. As strategic planning efforts proceed, Congress eventually could ask OMB to identify programs with similar or conflicting goals. As was to be expected during the initial efforts of such a challenging management reform effort, the integration of GPRA into program operations in pilot agencies has been uneven. This integration is important because Congress intended that outcome-oriented strategic plans would serve as the starting points for agencies’ goal-setting and performance measurement efforts. Ultimately, performance information is to be used to inform an array of congressional and executive branch decisions, such as those concerning allocating scarce resources among competing priorities. To help accomplish this integration, GPRA requires that beginning with fiscal year 1999, all agencies are to develop annual performance plans that provide a direct linkage between long-term strategic goals and what program managers are doing on a day-to-day basis to achieve those goals. 
These plans are to be submitted to OMB with the agencies’ budget submissions and are expected to be useful in formulating the president’s budget. Congress can play a decisive role in the implementation of GPRA by insisting that performance goals and information be used to drive day-to-day activities in the agencies. Consistent congressional interest at authorization, appropriation, budget, and oversight hearings on the status of an agency’s GPRA efforts, performance measures, and uses of performance information to make decisions, will send an unmistakable message to agencies that Congress expects GPRA to be thoroughly implemented. Chairman Clinger and the Committee on Government Reform and Oversight took an important first step last year when they recommended that House committees conduct oversight to help ensure that GPRA and the CFO Act are being aggressively implemented. They also recommended that House committees use the financial and program information required by these acts in overseeing agencies within their jurisdiction. A further important step toward sharpening agencies’ focus on outcomes would be for congressional committees of jurisdiction to hold comprehensive oversight hearings—annually or at least once during each Congress—using a wide range of program and financial information. Agencies’ program performance information that can be generated under GPRA and the audited financial statements that are being developed to comply with the Government Management Reform Act (GMRA) should serve as the basis for these hearings. GMRA expanded to all 24 CFO Act agencies the requirement for the preparation and audit of financial statements for their entire operations, beginning with those for fiscal year 1996. Also, consistent with GMRA, OMB is working with six agencies to pilot the development of consolidated accountability reports. 
By integrating the separate reporting requirements of GPRA, the CFO Act, and other specified acts, the accountability reports are intended to show the degree to which an agency met its goals, at what cost, and whether the agency was well run. I have endorsed the concept of an integrated accountability report and was pleased to learn that OMB plans to develop guidance, which is to be based on the experiences of the initial six pilots, for other agencies that may wish to produce such reports for fiscal year 1996. I believe that by asking agencies the following or similar questions, Congress will both lay the groundwork for communicating to agencies the importance it places on successful implementation of GPRA and obtain important information on the status of agencies’ GPRA efforts. The experiences of many of the leading states and foreign countries that have implemented management reform efforts similar to GPRA suggest that striving to measure outcomes will be one of the most challenging and time-consuming aspects of GPRA. Nevertheless, measuring outcomes is a critical aspect of GPRA, particularly for informing the decisions of congressional and high-level executive branch decisionmakers as they allocate resources and determine the need for and the efficiency and effectiveness of specific programs. As expected at this stage of GPRA’s implementation, we are finding that many agencies are having difficulty in making the transition to a focus on outcomes. For example, to meet the goals in its current GPRA performance plan, the Small Business Administration (SBA) monitors its activities and records accomplishments largely on the basis of outputs, such as an increased number of Business Information Centers. Such information is important to SBA in managing and tracking its activities. 
However, to realize the full potential of outcome-oriented management, SBA needs to take the next step of assessing, for example, the difference the additional Centers make, if any, to the success of small businesses. SBA also needs to assess whether the Centers and the services they provide are the most cost-effective way to achieve SBA’s goals. Similarly, the goals in the Occupational Safety and Health Administration’s (OSHA) GPRA performance plan are not being used to set the direction for OSHA and the measurable outcomes it needs to pursue. For example, one of OSHA’s goals is to “focus resources on achieving workplace hazard abatement through strong enforcement and innovative incentive programs.” Focusing resources may help OSHA meet its mission, but this represents a strategy rather than a measurable goal. Officials leading OSHA’s performance measurement efforts recognize that OSHA’s goals are not sufficiently outcome-oriented and that OSHA needs to make significant progress in this area to provide a better link between its efforts and the establishment of safer and healthier workplaces. We also are finding instances where pilot agencies could better ensure that their GPRA performance goals include all of their major mission areas and responsibilities. It is important that agencies supply information on all of their mission areas in order to provide congressional and executive branch decisionmakers with a complete picture of the agency’s overall efforts and effectiveness. For example, the Bureau of Engraving and Printing’s GPRA performance plans contain a goal for the efficient production of stamps and currency. However, these performance plans do not address an area that the Bureau cites as an important part of its mission—security. The Bureau has primary responsibility for designing and printing U.S. currency, which includes incorporating security features into the currency to combat counterfeiting. 
The importance of security issues has been growing recently because of heightened concern over currency counterfeiting. Foreign counterfeiters especially are becoming very sophisticated and are producing very high-quality counterfeit notes, some of which are more difficult to detect than previous counterfeits. The value of an agency’s performance information arises from the use of that information to improve the efficiency and effectiveness of program efforts. By using performance information, an agency can set more ambitious goals in areas where goals are being met and identify the actions needed in cases where goals are not met. In the pilot reports we reviewed, 109 of the 286 annual performance goals, or about 38 percent, were reported as not met. GPRA requires that agencies explain why goals were not met and provide plans and schedules for achieving those goals. However, for the 109 unmet goals we examined, the pilot reports explained the reason the goal was not met in only 41 of these cases. Overall, the pilot reports described actions that pilots were taking to achieve the goal for 27, or fewer than 25 percent, of the unmet goals. Moreover, none of the reports included plans and schedules for achieving unmet goals. Discussions of how performance information is being used are important because GPRA performance reports are to be one of Congress’ major accountability documents. As such, these reports are to help Congress assess agencies’ progress in meeting goals and determine whether planned actions will be sufficient to achieve unmet goals, or, alternatively, whether the goals should be modified. As you are aware, I have long been concerned about the state of the federal government’s basic financial and information management systems and the knowledge, skills, and abilities of the staff responsible for those systems. 
Simply put, GPRA cannot be fully successful unless and until these systems are able to provide decisionmakers with the program cost and performance information needed to make decisions. Because these financial systems are old and do not meet users’ needs, they have become the single greatest barrier to timely and meaningful financial reporting. Self-assessments by the 24 CFO Act agencies showed that most agency systems are not capable of readily producing annual financial statements and do not comply with current system standards. The CFO Council has designated financial management systems as its number one priority. Leading organizations have invested in training their managers and staffs and said that such training was critical to the success of their reform efforts. We are concerned that most federal agencies have not made progress in developing plans to provide this essential training in the creative and low-cost ways that the current budget environment demands. I fully appreciate that, in this environment, maintaining existing budgets devoted to management systems and training is a formidable challenge. However, continued—and in some cases, augmented—investment in these areas is important to ensure that managers have the information and skills needed to run downsized federal organizations efficiently. In passing GPRA, Congress recognized that, in exchange for shifting the focus of accountability to outcomes, managers must be given the authority and flexibility to achieve those outcomes. GPRA therefore includes provisions to allow agencies to seek relief from certain administrative procedural requirements and controls. Agencies’ efforts to focus on achieving results are leading a number of them to recognize the need to change their core business processes to better support the goals they are trying to achieve. For example, the U.S. 
Army Corps of Engineers’ Civil Works Directorate, Operation and Maintenance program, changed its core processes by means of several initiatives, including decentralizing its organizational structure and delegating decisionmaking authority to project managers in the field. In exchange for this delegated decisionmaking, managers at the Corps of Engineers increasingly are being held accountable for achieving results. The Corps has estimated that, by changing its core processes, it has saved about $6 million annually, including 175 staff years. Sustained congressional attention will be critical to continuing the momentum needed to ensure the aggressive implementation of GPRA. This concludes my prepared statement. I would be pleased to respond to any questions. 
The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. 
A recorded menu will provide information on how to obtain these lists.
GAO discussed: (1) the Government Performance and Results Act's (GPRA) potential contributions to congressional and executive branch decisionmaking; and (2) Congress' role in implementing GPRA. GAO noted that: (1) more federal agencies are recognizing the benefits of focusing on outcomes rather than activities to improve their programs' efficiency and effectiveness; (2) agencies cannot quickly and easily shift their focus because outcomes can be difficult to define and measure and major changes in services and processes may be required; (3) strong and sustained congressional attention is needed to ensure GPRA success; (4) GPRA provides a mechanism for reassessing agencies' missions and focusing programs while downsizing and increasing efficiency; (5) unclear goals and missions have hampered the targeting of program resources and caused overlaps and duplications; and (6) Congress needs to hold periodic comprehensive oversight hearings and to gather information on agencies' progress in measuring outcomes, on how GPRA performance goals and information drive agencies' daily operations, on how agencies use performance information to improve their effectiveness, on agencies' progress in improving their financial and information systems and staff training and recruitment, and on how agencies are aligning their core business processes to support mission-related outcomes.
In 1972, the Congress established the Construction Grants Program to provide grants to help local governments construct wastewater treatment facilities. These federal grants provided most of the funding for these projects; the remainder was provided by the local government constructing the project. In 1987, the Congress began to phase out that program and authorized the creation of state revolving funds (SRF), which provide loans to local governments and others. The states are required to match SRF capitalization grants at a rate of at least one state dollar for every five federal dollars. The states have the option of increasing the amount of SRF funds available to lend by issuing bonds guaranteed by the money in the SRFs. According to a national survey, as of June 30, 1995 (the latest data available), the states collectively had $18.9 billion in their SRF accounts; over one-half of this amount (approximately $11 billion) was provided by federal capitalization grants. (App. I provides additional information on funding sources for the nine SRFs.) For the most part, the Congress gave the states flexibility to develop SRF loan assistance programs that meet their particular needs. However, the states must ensure that the projects funded with loans issued up to the amount of the federal capitalization grants meet two types of federal requirements. The first type of requirement includes those requirements contained in the various statutes that apply generally to federal grant programs. These requirements—also called “cross-cutting” authorities—promote national policy goals, such as equal employment opportunity and participation by minority-owned businesses. The second type of requirement applies various provisions that applied to the Construction Grants Program (known as title II requirements, because that program was authorized by title II of the Federal Water Pollution Control Act Amendments of 1972). 
These requirements include compliance with the federal prevailing-wage requirement. The title II requirements apply only to those projects wholly or partially built before fiscal year 1995 with funds made directly available by federal capitalization grants. The transfer of federal funds to SRFs begins when the Congress appropriates funds annually to the Environmental Protection Agency (EPA). EPA then allots capitalization grants to the individual states, generally according to percentages specified in the Clean Water Act. To receive its allotment, a state has up to 2 years to apply for its capitalization grant. In order to apply, a state must, among other things, propose a list of potential projects to solve water quality problems and receive public comments on that list. After completing the list and receiving its capitalization grant, a state generally has 2 years to receive payments of the grant amount (via increases in its letter of credit). After each such increase, a state has up to 1 year to enter into binding commitments to fund specific projects. Next, a binding commitment is typically converted into a loan agreement. The overall amount of funds lent by the nine states increased between 1995 and 1996, from $3.3 billion to $4.0 billion. The amount lent by each state also increased. During the same time period, seven states increased their percentage of funds lent, and two states maintained or decreased their percentage of funds lent. As figure 1 shows, all nine states increased the amount of funds they lent between 1995 and 1996. Six states increased their amount by 15 to 29 percent. For example, Pennsylvania increased the amount lent by 17 percent, from $267 million to $311 million. The other three states increased their amount of funds lent by 30 percent or more. The largest change—95 percent—was in Arizona, which increased from $50 million to $99 million. 
As figure 2 shows, seven of the nine states increased their percentage of funds lent between 1995 and 1996. Three states increased their percentage by 17 percentage points or more. Four other states increased their percentage by 2 to 9 percentage points. Finally, one state’s percentage stayed the same, and another state’s percentage declined by 2 percentage points. Among the nine states, the percentage of funds lent at the end of 1996 ranged from 60 to 99 percent. Specifically, five states lent 80 percent or more of their available funds, another three states lent 70 to 79 percent, and the final state lent 60 percent. (App. II provides details on the amount and percentage of funds lent, by state.) Officials in eight of the nine states cited one or more factors at the federal level as affecting the amounts and percentages of funds they lent. In seven states, officials said that uncertainty about the reauthorization of the SRF program discouraged some potential borrowers. Also, in seven states, officials cited a concern about compliance with federal requirements, including possible increases in project costs because of a federal prevailing-wage requirement. Finally, in three states, officials identified other reasons, such as federal restrictions on the use of SRF funds. Officials in seven of the nine states said that the lack of reauthorization of the Clean Water Act limited their success in lending funds. Among other things, the lack of reauthorization made it difficult to assure the communities applying for loans that SRF funds would be available to finance their projects and created uncertainty among communities about the terms of their loans. Officials from the seven states generally agreed that the amount and timing of federal funding became more uncertain after the SRF program’s authorization expired at the end of September 1994. 
These officials said that prior to 1994, they used the amounts in the authorizing legislation to help determine how much money they would have to lend each year. According to these officials, these amounts also helped reassure the communities that federal funding would be available for projects. These officials said that the uncertainty created by the lack of reauthorization made it difficult for the states to schedule projects and assure the communities applying for loans that construction money would be available when needed. In addition, Pennsylvania officials said that the lack of reauthorization caused some communities to delay accepting SRF loans because they hoped for more favorable loan terms after the act was reauthorized. Specifically, the Congress has considered a proposal to extend the maximum term for an SRF loan, in certain cases, from 20 years to as much as 40 years and to provide lower interest rates. The state officials said that the communities were interested in both longer repayment periods and lower interest rates. According to a Pennsylvania official, several communities in the state had a loan approved by the state but had not formally accepted the loan. In three cases, local officials told us that they were delaying further action pending the act’s reauthorization; the total dollar value of such loans was about $15 million. The Pennsylvania official told us that small, low-income communities in particular would benefit from the proposal to lengthen the repayment period. For example, in March 1995 Pennsylvania approved a $3 million loan for Burrell Township, which has approximately 3,000 people. However, as of October 1996, the community had not accepted the loan on the chance that a reauthorized act would provide for a longer loan term and thus lower annual repayments. 
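The appeal of the proposed longer loan term can be illustrated with the standard loan-amortization formula. The sketch below assumes a hypothetical $3 million loan at a 3-percent interest rate (the report does not give the actual rate for the Burrell Township loan); it only shows the mechanics of why a 40-year term lowers annual repayments relative to the current 20-year maximum:

```python
def annual_payment(principal: float, rate: float, years: int) -> float:
    """Level annual repayment on a fully amortizing loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

# Hypothetical $3 million SRF loan at a 3-percent annual rate.
p20 = annual_payment(3_000_000, 0.03, 20)  # current 20-year maximum term
p40 = annual_payment(3_000_000, 0.03, 40)  # proposed 40-year maximum term

print(f"20-year term: ${p20:,.0f} per year")
print(f"40-year term: ${p40:,.0f} per year")
```

Under these assumptions, doubling the term cuts the annual repayment by roughly a third, which is why small, low-income communities were said to favor the longer repayment period.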
Officials in seven of the nine states said that compliance with the federal requirements made financing projects with SRF funds less attractive and, in some cases, caused communities to turn down SRF loans. In particular, five states raised concerns that a federal prevailing-wage requirement could make SRF-financed projects more expensive to construct than projects constructed with other funds. While the title II requirements—which include the federal prevailing-wage requirement—ceased to apply to new projects after October 1, 1994, state officials said they were concerned that these requirements would be reinstated in the reauthorization act. For example, an Arizona official said that the prevailing-wage requirement could inflate a project’s costs from 5 to 25 percent. A Louisiana official said that the community of East Baton Rouge Parish withdrew its 1990 SRF loan application for a project to serve about 120,000 people when it discovered that the prevailing-wage requirement would increase the cost of labor for the project by more than $1.1 million—31 percent. Louisiana officials said that before the prevailing-wage requirement expired, the state had experienced difficulties in making loans largely because local officials perceived the requirement as increasing the costs of projects. The officials said that Louisiana’s lending rate increased in part because the wage requirement expired. The state’s lending rate was 44 percent at the end of 1994, before the requirement expired; 62 percent at the end of 1995; and 79 percent at the end of 1996. EPA officials said they were aware that many states had a concern about the prevailing-wage requirement. They noted, however, that the requirement expired at the end of September 1994 and that the continued application of the requirement would be a state’s management decision. 
They also noted that, even before the requirement expired, it applied only to projects funded with federal capitalization grants (as opposed to projects funded solely with state matching or borrowed funds, for example). Moreover, they noted that some states have chosen to continue requiring projects to comply with the requirement, even though they are no longer required to do so; however, they said, both Arizona and Louisiana no longer apply the requirement to the projects they fund. Officials from three states identified other factors at the federal level that constrained lending. These included the awarding of federal funds directly for selected communities and federal restrictions on the use of SRF funds. Maryland and Pennsylvania officials said that the earmarking of federal funds—not from the SRF program—for specific communities raised the expectation in other communities that if they waited long enough, they might also receive funds directly. This expectation reduced these communities’ incentive to apply for an SRF loan. For example, a Maryland official said that state SRF lending was limited by a congressional decision to provide federal funds directly for a project in Baltimore, which SRF officials had expected to finance. He said that the City of Baltimore turned down the SRF loan because it received $80 million in federal grant funds for the project in 1993 and 1994. The state official said that it took time to find other communities to borrow the money that was originally set aside for the Baltimore project. The state increased its percentage of funds lent from 61 percent at the end of 1995 to 70 percent at the end of 1996. Officials from Missouri said that certain federal restrictions on the use of SRF funds limit the amount of loans they can make. For example, a state official cited restrictions on financing the costs of acquiring land. 
Under the Clean Water Act, SRF loans cannot be made to purchase land unless the land itself is an integral part of the waste treatment processes. Thus, wetlands used to filter wastewater as part of the treatment process are an eligible expense under the act. However, other lands, such as the land upon which a treatment plant would be built, are not eligible. According to the official, because purchasing land for a wastewater treatment facility represents a large portion of the facility’s cost but is ineligible for SRF financing, some communities are discouraged from seeking SRF loans. In Pennsylvania and Arizona, the amount of funds lent was limited by decisions on how to manage the loan fund. These decisions related to how to use SRF funds in Pennsylvania and how to publicize the program in Arizona. Pennsylvania established a state-funded program, independent of the SRF, in March 1988 to help communities finance wastewater and other projects. In the early years of the SRF program, Pennsylvania officials decided to finance about $248 million in wastewater projects with these state funds rather than wait for SRF funding to become available, according to state officials. Also according to these officials, the state decided to fund these projects as soon as possible with state funds to reduce public health risks. For example, about $30 million was awarded to the City of Johnstown to upgrade an existing treatment plant and thereby prevent raw sewage overflows and inadequately treated wastewater from being discharged into surface waters. According to a state official, Pennsylvania’s percentage of funds lent would have been higher if the state had chosen to fund these $248 million in projects with SRF funds. In that case, he said, Pennsylvania’s total amount of funds lent through the end of 1996 would have been $558 million, instead of $310 million, and the state would have lent all available funds, instead of 60 percent of these funds. 
Likewise, in Arizona, the state’s decisions limited the amount of funds lent. According to a state official, efforts to inform local government officials about the SRF program and interest them in participating were not effective in the program’s early years. This difficulty was compounded by restrictive provisions of state law that further limited the amount of SRF funds lent. The state official said that the outreach effort was refocused in 1995. He also noted that the approval of changes in state laws in 1995 and 1996 helped create a more positive atmosphere for outreach, even before the changes took effect. Arizona’s percentage of funds lent was 55 percent at the end of 1995 and 81 percent at the end of 1996. We provided copies of a draft of this report to EPA for its review and comment. On December 11, 1996, we met with EPA officials, including the Chief of the State Revolving Fund Branch in the Office of Wastewater Management, who noted that the report was generally accurate and well researched. In addition to suggesting clarifications in certain places, which we have incorporated where appropriate, EPA asked that we make it clear that the prevailing-wage requirement expired at the end of September 1994 and that any continued application would result from the states’ decisions to retain the requirement. We have added language in the report to clarify this point. Subsequent to our meeting, EPA provided us with written comments on this report, which are reproduced in appendix IV. We used a questionnaire and follow-up discussions to collect information on SRF activities and finances from program officials from the nine states. We selected these states to provide diversity in terms of SRF program size and complexity and other factors, such as geography. However, the conditions in these states are not necessarily representative of the conditions in all 51 SRFs. We also interviewed EPA headquarters and regional officials who are responsible for the SRF program. 
We did not attempt to independently verify the information collected from EPA or the states. Appendix III provides additional information on how we calculated the states’ percentages of funds lent. We conducted our review from March through December 1996 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce this report’s contents earlier, we plan no further distribution of the report until 30 days after the date of this letter. At that time, we will send copies of the report to the appropriate congressional committees and the Administrator of EPA. We will also make copies available to others upon request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix V. Under the Clean Water State Revolving Fund (SRF) Program, the states use funds from six primary sources to make loans for wastewater treatment and related projects. These are federal capitalization grants, state matching funds, borrowed funds, unused funds from the Construction Grants Program, repayments of loans, and earnings on invested funds. All nine states received federal grants and provided state matching funds. These two sources generally accounted for most of the money in the nine states’ revolving funds. Four of the nine states borrowed money for their revolving funds. Five states transferred unused funds from the old Construction Grants Program. All nine states received some loan repayments. Finally, eight states had investment earnings on loan repayments. Table I.1 shows the amount and sources of funding for the nine states we reviewed through each state’s fiscal year 1996. The amount of funds lent increased overall in every state from 1995 to 1996, as shown in the table below. In addition, the percentage of funds lent generally stayed the same or increased during that period. (App. III explains the basis for GAO’s calculation of the percentage of funds lent by state.) 
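GAO's percentage-of-funds-lent calculation, detailed in appendix III, divides the total funds lent by the total available from the six sources. A minimal sketch, using hypothetical dollar amounts chosen only to show the mechanics, including the one-to-five state match minimum and the effect of counting loan repayments in the denominator (a choice appendix III discusses):

```python
def percent_lent(funds_lent: float, sources: dict, include_repayments: bool = True) -> float:
    """Percentage of funds lent: total lent divided by total available to lend."""
    available = sum(amount for name, amount in sources.items()
                    if include_repayments or name != "repayments")
    return 100 * funds_lent / available

# Hypothetical state fund, in millions of dollars (the six sources).
sources = {
    "federal_grants": 300,     # federal SRF capitalization grants
    "state_match": 60,         # meets the 1:5 minimum (300 / 5 = 60)
    "leveraged": 0,            # bond proceeds, if any
    "title_ii_transfers": 20,  # unused Construction Grants Program funds
    "repayments": 55,          # loan repayments received
    "earnings": 15,            # investment earnings
}
funds_lent = 360

print(f"{percent_lent(funds_lent, sources):.0f}% lent with repayments counted")
print(f"{percent_lent(funds_lent, sources, include_repayments=False):.0f}% "
      "lent if repayments were excluded from available funds")
```

Because the fund revolves, counting repayments enlarges the denominator and lowers the reported percentage; excluding them raises it.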
Amount of funds lent (thousands of dollars)

To determine the percentage of funds lent by each state as of the end of 1995 and 1996, we divided the total amount of funds lent by the total funds available to lend, both as of the end of the year. We defined the total funds available as including the following six components: federal SRF grants, state matching funds, funds obtained through leveraging, transfers of unused funds from the Construction Grants Program, loan repayments, and investment earnings. We obtained information on loans made and funds available from each state through a questionnaire and follow-up contacts. In addition, we compared the states’ data on the amount of federal SRF grants with the data we obtained from the Environmental Protection Agency (EPA). Our methodology was based on the approach used by the Ohio Water Development Authority in conducting annual SRF surveys during 1992 through 1995. In addition, we discussed our methodology with officials from EPA, the Ohio authority, and the nine states, who generally agreed with our approach. However, state officials raised two concerns about this methodology. First, a Missouri official suggested that loan repayments should not be counted as part of available funds because they do not represent “new” money; rather, repayments represent a recouping of funds previously lent. He said that including repayments would result in double counting and thus overstate the amount of funds the states had available. We chose to include repayments because of the revolving nature of the state funds. Just as any loans made from repayments would be included in the total of funds lent, any repayments need to be included in funds available to provide a complete and consistent accounting of the funds available. If the repayments were excluded from the total amounts of funds available to lend, Missouri’s percentage would be 91 percent; according to our methodology, Missouri’s percentage was 80 percent. 
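The percentage-of-funds-lent methodology described above can be sketched in a few lines of Python. The dollar figures below are hypothetical, not any state’s actual data; they simply illustrate how excluding loan repayments from the denominator, as the Missouri official suggested, raises the computed percentage.

```python
# Hypothetical fund balances (thousands of dollars) -- illustrative only.
FUND_SOURCES = {
    "federal_srf_grants": 400_000,
    "state_matching_funds": 80_000,
    "leveraged_borrowing": 0,
    "construction_grants_transfers": 20_000,
    "loan_repayments": 60_000,
    "investment_earnings": 15_000,
}

def percent_lent(total_lent, sources, include_repayments=True):
    """Percentage of available funds lent: funds lent / funds available.

    GAO's methodology counts all six components (including repayments)
    in the denominator; include_repayments=False reproduces the
    alternative calculation the Missouri official proposed.
    """
    available = sum(amount for name, amount in sources.items()
                    if include_repayments or name != "loan_repayments")
    return 100.0 * total_lent / available

print(round(percent_lent(460_000, FUND_SOURCES), 1))     # repayments included
print(round(percent_lent(460_000, FUND_SOURCES,
                         include_repayments=False), 1))  # repayments excluded
```

With these illustrative numbers, including repayments yields 80.0 percent while excluding them yields 89.3 percent, mirroring (though not matching) the 80 versus 91 percent difference reported for Missouri.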
Second, an Arizona official contended that we should not have counted the state’s full federal grants as being available to lend. The state did not accept its full federal grants for 2 years. According to his calculation, if the percentage of funds lent were based on the amount that Arizona actually received (rather than the amount it could have received), the state’s percentage of funds lent would have been 99 percent in 1995, rather than 55 percent. In our calculations, we used the full amount of federal grants that were available to the state because the state’s decisions resulted in Arizona’s not accepting its full federal grants.

Richard P. Johnson, Attorney-Adviser
Pursuant to a congressional request, GAO reviewed selected states' use of their revolving funds, focusing on the: (1) amount of funds lent and the percentage of available funds lent, as of the end of each state's fiscal year (FY) 1996; and (2) factors at the federal and state levels that constrained the amount and percentage of funds lent. GAO found that: (1) the nine states GAO surveyed increased the total amount of funds they lent from $3.3 billion in 1995 to $4.0 billion in 1996; (2) six states achieved an increase of between 15 and 29 percent, and the other three states achieved an increase of 30 percent or more; (3) seven of the nine states increased the percentage of available funds they lent, and of these seven, three states increased this proportion by 17 percentage points or more; (4) the percentage of funds lent as of the end of 1996 varied substantially among the nine states; (5) five states had lent 80 percent or more of their available funds, three states had lent between 70 and 79 percent, and one state had lent 60 percent; (6) in eight of the nine states, officials identified the expiration of the authorizing legislation, as well as federal requirements, as affecting the amount and percentage of funds lent; (7) officials in seven states said that other federal requirements, such as a prevailing-wage provision, discouraged some communities from seeking loans; (8) in two states, officials said that the decisions made by the state programs constrained lending; (9) program managers in one state decided to finance certain wastewater projects from state funds rather than from the revolving fund, thereby limiting both the amount and the percentage of funds lent from the revolving fund; and (10) in the other state, efforts to publicize the program to local officials were not effective in the early years of the program.
When fully implemented, SMS provides a continuous approach to managing safety risk, which FAA expects will improve aviation safety. SMS is not an additional safety program that is distinct from existing activities that accomplish an entity’s safety mission, such as quality management, quality assurance, or similar activities. Rather, SMS provides a set of decision-making processes and procedures to plan, organize, direct, and control business activities in a manner that enhances safety and ensures compliance with regulatory standards. According to FAA, the overarching goal of SMS is to improve safety by helping ensure that the outcomes of any management or system activity incorporate informed, risk-based decision making. SMS consists of four key components: (1) safety policy, (2) safety risk management, (3) safety assurance, and (4) safety promotion. (See fig. 1.) Together, these four components are intended to provide a systematic approach to achieving acceptable risk levels. FAA provides its personnel with guidance on the principles underpinning these components in its official orders and other internal FAA guidance. To the industry, FAA currently provides SMS guidance via advisory circulars, an SMS newsletter, focus groups, and a dedicated page for the SMS program office on the FAA website. FAA is undertaking the transition to SMS in coordination with the international aviation community, working with ICAO to adopt applicable global standards for safety management. ICAO first mandated SMS worldwide for air traffic service providers in 2001. ICAO later specified that member states should mandate SMS implementation for airports, air carriers, and others by 2009. FAA began SMS implementation in 2005, but FAA officials informed ICAO that the agency and industry would not be able to meet the 2009 deadline. 
The United States filed a “difference” with ICAO—indicating that it does not yet completely comply with the standard—with the understanding that implementation is under way and that FAA is in the midst of a rulemaking to require SMS for commercial air carriers and certificated airports. There have been other actions within the United States to encourage SMS implementation. For instance, in 2007, NTSB recommended that FAA require all commercial air carriers to establish an SMS and in 2011 added SMS for all modes of transportation to its Most Wanted List. NTSB identified SMS as one of the most critical changes needed to reduce the number of accidents and save lives. FAA’s implementation of SMS will affect how the agency oversees the aviation industry. Historically, FAA oversight of airlines, airports, and other regulated entities has involved oversight of such things as operations and maintenance. Once SMS requirements for industry are in place, FAA will continue this oversight, but will also apply SMS principles to its processes for oversight. Specifically, the agency will provide oversight of the safety management systems of service providers, such as commercial air carriers and certificated airports, to help ensure that they are managing safety within their operations through SMS. Internally, FAA has directed the Air Traffic Organization, Airports Organization, Aviation Safety Organization (Aviation Safety), the Office of Commercial Space Transportation, the Next Generation Air Transportation System (NextGen) Office, and the Office of Security and Hazardous Materials Safety to implement SMS. Within Aviation Safety, the Aircraft Certification Service (Aircraft Certification), and the Flight Standards Service (Flight Standards), among others, are also implementing SMS. Oversight of the aviation industry’s implementation of SMS will be conducted by FAA’s inspector workforce, some of whom have been involved in SMS pilot programs. 
The inspector workforce includes approximately 4,000 Flight Standards aviation safety inspectors around the country who oversee certificate holders, including commercial air carriers and repair stations, among others. Approximately 950 aircraft certification engineers and inspectors are responsible for overseeing firms that design and manufacture aircraft, aircraft engines, propellers, and other parts and equipment. The nation’s certificated airports are overseen by approximately 40 airport certification inspectors assigned to regional offices. In 2009, FAA issued an Advance Notice of Proposed Rulemaking (ANPRM) to solicit comments on establishing a regulatory framework to require SMS for various sectors of the aviation industry, including commercial air carriers, repair stations, and design and manufacturing firms. FAA received public comments on the ANPRM but withdrew it following a 2010 congressional mandate for FAA to issue a rule requiring air carriers to implement SMS. FAA subsequently issued a Notice of Proposed Rulemaking (NPRM) in November 2010 that would require SMS for commercial air carriers. FAA simultaneously developed a proposed rule for certificated airports and issued an NPRM in October 2010 that would also require SMS at those facilities. The status of rulemaking efforts for repair stations and design and manufacturing firms is discussed below. Five FAA organizations are proceeding with SMS implementation. A sixth organization, Air Traffic, completed implementation in 2010. Figure 2 shows the FAA organizations that are implementing SMS as well as industry segments included in this report that these selected organizations oversee. The Air Traffic Organization completed SMS implementation in 2010 and now uses SMS-based processes to identify hazards, enact mitigations, and assess the extent to which the mitigations are working. 
Since we last reported on their progress in 2012, the other organizations—Aviation Safety, the Airports Organization, the NextGen Office, and the Office of Commercial Space Transportation—have continued SMS implementation by developing SMS guidance and establishing internal SMS procedures, and an additional organization, the Office of Security and Hazardous Materials Safety, has begun SMS implementation. Aviation Safety—including two of its components, Flight Standards and Aircraft Certification—is developing or updating SMS guidance and plans, according to FAA officials. For example, Flight Standards is developing an SMS-based oversight system for commercial air carriers. The Airports Organization established procedures for conducting safety risk management for certain airport actions that require FAA approval. At the time of our last report, the NextGen Office and the Office of Commercial Space Transportation were implementing SMS in line with FAA’s strategic goal in its 2009-2013 Flight Plan to implement SMS policy in all appropriate FAA organizations. In May 2013, FAA issued a revised order that directed these two organizations to implement SMS. In addition, an FAA order directed another organization, the Office of Security and Hazardous Materials Safety, to implement SMS. That organization is updating its inspections and investigations manual to be more consistent with SMS principles and is coordinating with other FAA organizations’ SMS efforts to identify hazards in the transport of hazardous materials. Voluntary implementation of SMS varies by industry segment, while FAA continues rulemaking to require SMS implementation by commercial air carriers and certificated airports. Beginning in 2007, FAA initiated various pilot projects for commercial air carriers, certificated airports, repair stations, and design and manufacturing firms, among others, to encourage voluntary SMS implementation and to guide rulemaking activities. 
The pilot projects varied among FAA organizations. The Airports Organization and Aircraft Certification pilot projects were conducted for fixed periods of time, while the Flight Standards pilot project, which began in 2007, is ongoing. Flight Standards officials said that the pilot project for commercial air carriers would end after the final rule is published and that they hope to transition the pilot project for the remaining certificate holders (e.g., repair stations) into a more formal program. Participant selection for the pilots also varied. FAA selected the participants for the Aircraft Certification pilot for design and manufacturing firms, while the Flight Standards and Airports Organization pilot project participants volunteered to participate. The pilot projects are more fully described in forthcoming sections of this report. While most commercial air carriers—the first industry segment for which SMS implementation will be mandatory—are moving forward with implementation, a small proportion of certificated airports, repair stations, and design and manufacturing firms have begun SMS implementation. For example, less than 1 percent of the more than 4,700 repair stations are currently implementing SMS. These three industry segments, according to FAA officials and industry stakeholders, may be waiting for additional guidance or rules, which may not be finalized for a number of years, if the time frames for finalizing the air carrier and airport SMS rules are any indication. Seventy-seven of the 83 U.S. commercial air carriers are at varying levels of SMS implementation as FAA nears completion of the final SMS rule. Flight Standards developed four SMS implementation levels to benchmark the progress of commercial air carriers (see table 1 below for additional details on the Flight Standards SMS implementation levels). In general, Flight Standards officials work with commercial air carriers to determine their SMS implementation level. 
Flight Standards reports the SMS level for each participating stakeholder in its monthly SMS newsletter. According to the March 2014 SMS newsletter, of the 77 commercial air carriers implementing SMS, the vast majority of them (58) were at SMS level one or two, 10 were at level three, and 9 were at the final continuous improvement level. The Airline Safety and Federal Aviation Administration Extension Act of 2010 stipulated that FAA issue a final SMS rule for commercial air carriers. In November of 2010, FAA published an NPRM that would require all commercial air carriers to implement an SMS that meets the requirements of the new regulation and is acceptable to FAA within 3 years of the effective date of the final rule. According to FAA, it did not meet the statutory deadline of August 1, 2012, to issue the final rule due to the lengthy internal DOT review process and complexity of the required benefit-cost analysis. The DOT Significant Rulemaking Report for June 2014 indicated that FAA anticipates publishing the final rule in September 2014. (See fig. 3.) Commercial air carriers that have already begun to implement SMS must ensure that their efforts comply with the requirements of the final rule. While FAA’s rulemaking to require SMS at all 545 certificated airports has been delayed, FAA officials estimated that 9 of the nation’s largest certificated airports were voluntarily implementing SMS as of April 2014. Between 2008 and 2012, the Airports Organization conducted three voluntary SMS pilot studies to provide airports opportunities to gain knowledge about SMS and provide feedback to FAA in its rulemaking efforts. The pilot studies also allowed certificated airports to share their SMS implementation practices with other airports. Thirty-one certificated airports of various sizes participated in at least one of the pilot studies. 
Twenty-one of the 22 airports in the first pilot study and 1 airport that did not participate in any of the pilot studies received a grant to develop an SMS plan. To further encourage SMS implementation, in August 2013, the Airports Organization published guidance that provides some certificated airports access to federal funds for certain SMS activities, including making SMS management software eligible for funding through the Airport Improvement Program. FAA officials indicated that another goal of the guidance was to ensure that airports would be aware of the basic elements required for SMS implementation plans and manuals. FAA began the SMS rulemaking process for certificated airports in July 2008 and in October 2010 published an NPRM to require that all certificated airports develop and maintain an SMS that is approved by FAA. According to FAA officials, the nature of the comments received on the NPRM led them to significantly modify the proposed rule and provide another period for public comment. As a result, in December 2012, FAA announced that, rather than publish a final rule, it would issue a supplemental notice of proposed rulemaking (SNPRM), originally scheduled for publication in December 2013. FAA is considering changes in the proposed rule’s applicability and to some proposed requirements, including SMS implementation options for various sizes of certificated airports. FAA has not yet issued the SNPRM, but in June 2014 FAA estimated that the SNPRM would be published for public comment in October 2014. With FAA’s decision to issue an SNPRM, it is likely that required SMS implementation for certificated airports will not occur for some time. Figure 4 summarizes key dates related to SMS rulemaking for certificated airports, as indicated in the DOT Significant Rulemaking Report for June 2014. 
Few repair stations are currently implementing SMS, and FAA has not yet determined if it will conduct rulemaking to require SMS at repair stations. As of March 2014, just 15 repair stations (of more than 4,700) were implementing SMS through the Flight Standards SMS pilot project, according to an FAA newsletter. As it does with air carriers, Flight Standards uses the four SMS implementation levels to benchmark and determine the progress of repair station SMS implementation. Of these 15, 13 were at SMS level one or two. Our interviews with 5 repair stations that participated in the pilot program suggested that there may be fewer than 14 repair stations currently implementing SMS because representatives we interviewed from 4 of the repair stations indicated that they had either ceased or never actually initiated their SMS efforts. Among the reasons these repair station representatives cited for not moving forward was a lack of FAA support. For example, representatives from one repair station told us that FAA did not respond to their questions on SMS policy changes and did not provide feedback on the gap analysis they performed. FAA officials indicated that there are reasons that more repair stations may want to implement SMS. Flight Standards officials said that once the commercial air carrier SMS rule is finalized, repair stations might want to implement their own SMS so that it could be recognized by the commercial air carriers they may contract with. A senior FAA official noted that because the commercial air carrier SMS rule will include identifying risks posed by repair stations, it would have an impact on repair station operations. Additionally, as other countries complete SMS rulemaking, some repair stations may want to implement SMS for international business reasons. 
Several design and manufacturing firms were implementing SMS as of April 2014, while an FAA rulemaking committee studies SMS incorporation into the design and manufacturing environment of approximately 3,000 firms. Aircraft Certification conducted an SMS pilot project for design and manufacturing firms from 2010 to 2012 with 11 participants. The participants were selected to represent the diversity of the industry in terms of size and types of products designed and/or manufactured. According to an FAA official, approximately 6 pilot project participants were continuing SMS implementation as of April 2014. In our discussions with several design and manufacturing firms that participated in the pilot, we found that 2—Honeywell and Boeing—were planning to complete SMS implementation in 2015. Honeywell representatives indicated that they expect to complete SMS implementation by mid-2015. Boeing representatives said their company was implementing an SMS as defined by ICAO standards and also expect to complete SMS implementation by mid-2015. In October 2012, FAA chartered a 2-year aviation rulemaking committee (ARC) to, among other things, study SMS incorporation into the design and manufacturing firm environment. According to a January 2014 FAA plan, once the committee has completed its work, FAA will begin a rulemaking process that will result in a change to the current rule for certificating design and manufacturing firms. The plan anticipates the release of an NPRM by January 2016 and publication of a revised final rule by June 2017 to require, among other things, design and manufacturing firms to implement SMS. However, an FAA official stated in April 2014 that FAA had not yet determined when an SMS requirement might be proposed for design and manufacturing firms. FAA officials stated that a primary challenge in developing SMS rules for commercial air carriers and certificated airports is developing the benefit-cost analyses. 
FAA officials explained that estimating the expected benefits for SMS rules for air carriers and airports relies on analysis of the economic cost of past accidents that might not have occurred had SMS been in place. However, determining which past accidents might not have occurred based on SMS implementation either by air carriers or airports or both is complex and somewhat subjective, and this determination can be affected by other safety improvements that have occurred in recent years or that are being proposed. For example, if subsequent to a particular accident, some other safety improvements have been implemented that would have reduced its likelihood, only a portion of the value of the damage and harm related to that accident can be included in the current benefit calculation for future SMS since the likelihood of the same accident occurring has already been somewhat mitigated. Additionally, an airport industry trade organization expressed concerns in its public comments on the airport NPRM that the proposed rule incorrectly assumed that certain accidents would be mitigated by an airport SMS because those occurrences happened on airport grounds but in areas controlled by air carriers. The trade organization reviewed 53 of the 89 accidents included in the FAA analysis. In the trade organization’s view, 37 of these 53 accidents had little or limited connection to safety actions that an airport might take within its SMS. The likelihood of the accident or incident occurring could therefore be mitigated by the air carrier’s SMS implementation. Some accidents might also be avoided based on joint implementation of SMS systems by both the air carriers and airports. In such cases, expected benefits have to be apportioned across the benefit-cost analyses of the two separate SMS systems—that is, the SMS for air carriers and the SMS for airports—otherwise expected benefits would be double counted. 
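The apportionment logic described above can be sketched as follows. The accident records, dollar values, mitigation shares, and attribution shares are all hypothetical, chosen only to show how discounting for prior safety improvements and then splitting the remaining benefit between the two analyses keeps the combined claimed benefits from exceeding the total still-avoidable losses.

```python
# Hypothetical avoided-accident records -- illustrative values only.
# "prior_mitigation" is the share of the accident's likelihood already
# addressed by other safety improvements; "carrier_share" is the share
# of the remaining benefit attributed to the air carrier's SMS (the
# remainder goes to the airport's SMS).
accidents = [
    {"cost": 10_000_000, "prior_mitigation": 0.5, "carrier_share": 0.75},
    {"cost": 4_000_000, "prior_mitigation": 0.0, "carrier_share": 0.25},
]

def apportion_benefits(accidents):
    """Split expected benefits between the carrier and airport SMS
    benefit-cost analyses so no dollar of avoided loss is counted twice."""
    carrier = airport = 0.0
    for a in accidents:
        # Discount for the portion already mitigated by other improvements.
        remaining = a["cost"] * (1 - a["prior_mitigation"])
        carrier += remaining * a["carrier_share"]
        airport += remaining * (1 - a["carrier_share"])
    return carrier, airport

carrier, airport = apportion_benefits(accidents)
# The two analyses together claim exactly the still-avoidable losses.
assert carrier + airport == sum(
    a["cost"] * (1 - a["prior_mitigation"]) for a in accidents)
```

The closing assertion captures the double-counting constraint: whatever shares are assigned, the benefits claimed in the two separate analyses must sum to the total avoidable loss, never more.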
In addition, a number of comments on the proposed rules for both the air carrier and airports questioned the estimated costs imposed on aviation stakeholders related to the implementation of the SMS systems. FAA, as part of its rulemaking process, revisited the benefit-cost analysis after reviewing public comments on the proposed rules. Another challenge FAA faces in implementing SMS rules is planning for how data collected for proactive safety efforts will be used. Over the years, we have identified numerous challenges related to FAA data and made a number of recommendations, some of which FAA has addressed, to improve the use of data that FAA collects. One recommendation of particular significance to SMS implementation, however, has yet to be fully implemented. In 2010, we recommended that FAA develop a comprehensive plan that addresses how data fits into FAA’s implementation of a proactive approach to safety oversight. We indicated that the plan should fully describe the relevant data challenges, analytical approaches to be used, staffing requirements, and the efforts to address them. Because SMS relies on data for maximum benefit, we concluded that a comprehensive data plan by FAA would help expand its capability to improve aviation oversight. DOT concurred with this recommendation, and although FAA has taken actions such as implementing data quality controls consistent with our standards for data quality, it has not yet completed the recommended plan. For example, it has not outlined how it will analyze or use data gathered under SMS, assessed the staffing requirements it will need to conduct the data analysis, and indicated a timeline to complete a plan. GAO believes this recommendation remains valid. SMS implementation challenges have been identified during FAA’s SMS pilot projects and the rulemaking processes for commercial air carriers and certificated airports. We also identified some of these challenges in our prior SMS work. 
We selected five frequently cited challenges to discuss with aviation stakeholders: data sharing and protection, resource constraints, implementation costs, FAA oversight, and collaboration with other aviation stakeholders. Our discussions with 20 selected aviation stakeholders that participated in at least one FAA SMS pilot project confirmed that these challenges still exist for some stakeholders. Based on these discussions, we found that a majority of the selected stakeholders identified uncertainties about FAA’s oversight plans (17 stakeholders); uncertainties about SMS implementation costs (14 stakeholders); and data protection (16 stakeholders) as challenges. In addition, we found that uncertainty about the FAA’s final rule requirements raised some additional concerns with the stakeholders we interviewed, and half of the stakeholders said that of the four SMS components, the safety risk management component is the most challenging to implement because of the time and analyses required to identify and mitigate risks. Seventeen stakeholders identified their uncertainty about FAA’s plans and preparation for oversight of industry’s SMS implementation as a challenge, with 12 identifying it as a great or very great challenge. Specifically, they reported a lack of information about FAA’s intended oversight plans, training for FAA inspectors responsible for oversight, and the potential for inconsistent regulatory interpretation because FAA has not shared that information with stakeholders. Just as the move to SMS will change how stakeholders manage risk, FAA’s oversight approach will also need to undergo some changes. (See table 2.) According to FAA, while its inspectors will continue to check for compliance with safety regulations and retain enforcement authority, oversight will also include reviewing the SMS guidance and records and ensuring that processes for identifying and mitigating risks are being carried out. 
Internal control standards, an FAA rulemaking committee, and the ICAO Safety Management Manual all state the need to establish plans to ensure that goals and objectives are met. Additionally, we have previously found that communicating internal control efforts on a timely basis to internal and external stakeholders, such as FAA inspectors and aviation industry stakeholders, helps ensure that effective oversight can take place. A 2009 SMS aviation rulemaking committee recommended that FAA ensure that sufficient planning, policy and guidance, and workforce training are in place prior to SMS implementation to accommodate efficient, timely, objective and consistent oversight. Internal control standards also state that personnel should be provided with the right training and tools to perform their duties, and the ICAO Safety Management Manual notes the significance of training to successful SMS implementation and indicates that priority for training must be given to personnel involved in implementation or oversight of SMS. While acknowledging that its oversight role will be changing, FAA has not yet completed plans for overseeing industry’s implementation of SMS, including plans to develop guidance and train inspectors about their SMS oversight functions and responsibilities. Nine of the aviation industry stakeholders and three industry trade groups we interviewed expressed concerns that FAA inspectors will not be prepared to oversee industry implementation of SMS. These concerns related primarily to a lack of SMS training for FAA inspectors. FAA officials told us that they plan to finalize SMS rules before addressing oversight issues. Based on the NPRM, commercial air carriers would have 3 years after the effective date of the final rule to comply with SMS implementation requirements. When the final rule for commercial air carriers becomes effective, FAA inspectors will assume oversight responsibility to ensure that commercial air carriers comply with the final rule. 
Included with this responsibility is approving commercial air carriers’ SMS plans. According to Flight Standards data, as of November 2013, over 2,700 of its approximately 3,900 inspectors had completed initial training in SMS activities. This includes inspectors who oversee commercial air carriers. However, even though the publication of the final SMS rule for air carriers is expected before the end of the year—currently planned for September 2014—Flight Standards has not yet established guidance for inspectors on overseeing commercial air carriers’ implementation of SMS or updated its inspector training program to incorporate such guidance. In addition, even though a large number of inspectors have received initial SMS training, stakeholders remain concerned about inspectors’ knowledge of SMS as well as the potential for inconsistent interpretation of SMS requirements. At the same time, the 77 commercial air carriers that have been voluntarily implementing SMS will need to review their SMS to ensure compliance with the final rule and may request FAA guidance during that process to determine whether and what modifications need to be made. Given that most commercial air carriers already have started implementing SMS, the demand for inspector guidance or assistance may come quickly. For example, inspectors might be asked to review a commercial air carrier’s SMS to ensure that all required components are included. Without taking proactive preparations for oversight before requiring SMS, FAA could find itself with oversight responsibilities for which it is not fully prepared. Consequently, the intended benefits of implementing this proactive approach to managing risk could be limited. For other industry segments, notably airports and design and manufacturing firms, that are likely to be next to face an SMS requirement, FAA has taken initial steps to train inspectors working in those areas. 
In 2008, the Airports Organization provided 3 days of basic SMS instruction to airport safety inspectors during the organization’s annual recurrent training. Officials in the Airports Organization acknowledged that, should FAA finalize an SMS rule for certificated airports, up-to-date training will need to be developed and delivered to the approximately 40 airport safety inspectors, including training in detailed inspection procedures for oversight of an airport’s SMS. Likewise, Aircraft Certification plans for some of its staff to review the Flight Standards initial SMS training course to determine what type of SMS training is necessary for some of its estimated 950 aircraft certification engineers and inspectors. In addition to training, six aviation stakeholders we interviewed expressed concerns about potentially inconsistent regulatory and other interpretations by FAA inspectors. This concern was identified by smaller stakeholders we interviewed as well as those that hold multiple certificates overseen by different FAA organizations. For example, one stakeholder we interviewed indicated that the company holds certificates for multiple facilities in multiple FAA districts, and is concerned that inspectors from different districts may have different interpretations of the SMS rules once they are implemented. We found in 2010 that ensuring consistency in regulatory interpretations has been a long-standing issue for FAA. This was also identified in the SMS ARC final report as an issue that FAA should address prior to SMS implementation. In 2012, FAA formed another aviation rulemaking committee to address consistency in regulatory interpretation by Flight Standards and Aircraft Certification. The committee made six recommendations to improve the consistency of regulatory interpretation and improve communication with industry. 
In July 2013, FAA provided a preliminary implementation plan to address the recommendations and indicated that a detailed implementation plan with milestones would be developed, but it provided no time frame for when that detailed plan would be released. Similarly, two stakeholders that operate both domestically and overseas expressed concerns about whether FAA’s SMS regulations will be consistent with the ICAO framework that the regulations are designed to meet and harmonized with the SMS requirements of other countries. These stakeholders are concerned that potential inconsistencies could pose problems if different standards exist in different countries. For example, one repair station with overseas operations told us that any SMS it may be required to implement to meet FAA regulations must also comply with the SMS requirements of the other countries where it operates. The need for international acceptance of a service provider’s SMS was also identified as an issue for FAA to address in the SMS ARC’s final report in 2010. Aviation stakeholders we interviewed largely had not conducted cost assessments of their SMS efforts (only 5 of 20 had done so), and the difficulty of assessing these costs could present a challenge to those considering voluntary implementation of SMS. Fourteen of the 20 stakeholders told us that identifying and assessing SMS costs was a challenge. Further complicating cost estimates, three stakeholders indicated that implementation costs are spread throughout the organization and that isolating those specifically related to SMS would be difficult. For example, one stakeholder indicated that because the computer systems it purchased to support SMS also support other operational objectives, it has not attempted to separate out SMS costs. In addition, seven stakeholders said that the absence of a final rule specifying SMS requirements made assessing costs difficult. 
Though stakeholders we spoke with were generally unable to determine the cost of implementing SMS, they were able to identify some types of related expenditures. We asked each of them about specific costs related to SMS implementation, and they reported incurring the following types of costs: 14 trained employees on SMS or SMS concepts, 10 incurred recordkeeping costs, 10 purchased computer systems, and 7 hired new employees. Other costs identified by stakeholders included those related to safety promotion and mitigating hazards identified through SMS implementation. Industry stakeholders we spoke with had concerns about sharing and protecting their safety data; 16 of the 20 stakeholders we interviewed identified this as a challenge, with 7 identifying this challenge as great or very great. In March 2010, the SMS ARC final report identified this as an issue and recommended that prior to the promulgation of an SMS rule, protections be put in place to ensure that safety information and proprietary data are protected from disclosure and use for other purposes, such as enforcement actions. The 2012 FAA Modernization and Reform Act (the 2012 Act) included a provision that placed a limitation on the disclosure of safety information under the federal Freedom of Information Act (FOIA) for information gathered for the purposes of developing and implementing an SMS. However, information provided by or to publicly owned airports is also subject to state FOIA laws, which are not covered under the protections in the 2012 Act. Commercial air carriers, repair stations, and design and manufacturing firms are privately owned and not directly subject to state FOIA laws. However, any data airports collect and any data shared with airports could, according to FAA officials and industry experts, be subject to state FOIA laws, because most certificated airports in the U.S. are owned by a public entity such as a state, city, port, or other local or regional government body. 
According to officials in the Airports Organization, a federal legislative resolution to override state laws is not a feasible option. These officials reported that in some locations, airports have begun to work with state legislative bodies to address the issue of data protection and disclosure. Without legal protections to prevent data disclosure, stakeholders told us they feel at risk in a variety of potential scenarios related to SMS implementation. Litigation for damage (real or perceived) caused by incidents disclosed through SMS—Eight stakeholders we spoke with, as well as the 2010 SMS ARC, indicated that protections are needed not only to prevent the disclosure of safety data through the state FOIA process, but also from disclosure as part of litigation. For example, a certificated airport must meet the regulatory requirements of 14 C.F.R. Part 139. One standard included in Part 139 is that holes in airport pavement must not exceed 3 inches in depth. An airport representative we met with indicated that if a hole less than 3 inches deep is noted through a reporting system implemented under SMS, the airport may postpone that repair until a later time as it is within the maximum depth allowed under the regulations and the risk is acceptable. Additionally, it would not likely negatively impact the operations of a large aircraft on the runway. However, it could pose a hazard for smaller general aviation aircraft. If the airport does not make that repair and the hole factors into an accident involving a general aviation aircraft, the extent of the airport’s responsibility and legal liability is uncertain and may become the subject of a lawsuit. Because the intent of SMS is to correct hazardous conditions before an accident occurs, stakeholders want protections that address their vulnerability to potential litigation. 
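The pavement example above illustrates how an SMS reporting system can record hazards that comply with regulation yet still carry residual risk. The triage logic could be sketched roughly as follows; the function, names, and classification labels are hypothetical, with only the 3-inch Part 139 depth limit taken from the example above.

```python
# Hypothetical sketch of how an airport SMS reporting system might triage
# a reported pavement hole against the 14 C.F.R. Part 139 depth standard
# described above. Everything except the 3-inch limit is illustrative.

PART_139_MAX_HOLE_DEPTH_IN = 3.0  # regulatory maximum depth from the example

def triage_pavement_report(depth_inches: float) -> str:
    """Classify a reported pavement hole for follow-up action."""
    if depth_inches > PART_139_MAX_HOLE_DEPTH_IN:
        return "immediate repair"  # exceeds the regulatory standard
    # Below the limit: compliant with Part 139, but still a recorded hazard
    # that could affect smaller general aviation aircraft.
    return "monitor and schedule repair"

print(triage_pavement_report(3.5))  # deeper than the limit
print(triage_pavement_report(2.0))  # within the limit, logged for later repair
```

The point of the sketch is that even the compliant case leaves a documented record of a known hazard, which is the kind of SMS data stakeholders fear could surface in litigation.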
Misinterpretation of safety data by a third party—If safety data is released to the public under a state FOIA, the aviation industry is exposed to potential misinterpretation, misuse, and further dissemination of the data by third parties. For example, one stakeholder expressed concern that safety data could be obtained, reviewed, analyzed, or disseminated by a third party without the proper context, including how and why the data were collected and the limitations of their use. Without the proper context, this stakeholder stated that dissemination would lead to misinterpretation, which may then be further disseminated by others such as policy makers or the media. Enforcement actions by FAA or other regulatory bodies—As we discussed earlier, the safety risk management component of SMS includes systems that allow individuals to identify potential safety hazards, assess the risks arising from those hazards, and develop mitigation plans to reduce or eliminate those risks. ICAO guidance indicates that in an SMS environment, procedures should be in place to ensure information obtained under SMS will not be used for enforcement actions. However, four stakeholders expressed concern that data about potential hazards could be used beyond SMS. For example, a repair station’s SMS would address identified hazards and, if necessary, develop a mitigation plan to address those hazards. However, if an identified hazard also constitutes a regulatory violation, the company might be subject to FAA enforcement action. The SMS ARC recommended in 2010 that FAA should establish a policy or regulation that provides limits on enforcement actions applicable to information that is identified or produced by SMS. Currently, FAA foregoes civil penalty actions when violations are promptly disclosed to FAA under a voluntary disclosure program, subject to some limitations. 
FAA believes that the open sharing of apparent violations and a cooperative approach to solving problems will enhance and promote aviation safety. Current FAA policy covers commercial air carriers, repair stations, and design and manufacturing firms, but not certificated airports. It is unclear if FAA will extend this coverage to information provided as part of SMS efforts not currently covered. In 2012, we also found the protection of data to be a concern and recommended that FAA consider strategies to address concerns that may adversely affect data collection and data sharing—essential to realizing the benefits of SMS. FAA began discussions with airports and airport associations in 2013 to determine what type of SMS data should be considered for protection. FAA also received and considered comments from airports and airport organizations about data collection, protection, and sharing in response to the NPRM for airports. We concluded in our September 2012 report that the success of SMS relies heavily on the sharing of safety data and that without appropriate protections, the willingness of the aviation industry to share safety data will likely be jeopardized. Although protections exist for safety data from commercial air carriers, FAA has determined that a similar legislative approach is not feasible for data from certificated airports. However, according to FAA, some airports are working to address data protections under state laws through outreach to state agencies and legislators. Four of the stakeholders we spoke with felt that delays in the rulemaking process were detrimental to their implementation of SMS while others were confident that their SMS implementation would meet forthcoming requirements. Representatives from two industry stakeholders we spoke with told us that they did not want to move forward in implementing SMS until rulemaking is complete, largely because they first want to know what the requirements will be. 
They explained that they were concerned about the time and resources required to rework the SMS should it not meet the final rule requirements. Fourteen of the stakeholders we interviewed were concerned that the SMS requirement would not be scalable and flexible, that is, able to apply to a broad range of organizations, from small operators to large ones with multiple facilities and certifications. Some of the concerns were based on the stakeholders’ views that FAA tends to fashion one-size-fits-all regulations or regulations that focus on one subset of a stakeholder group. For example, stakeholders cited a concern that FAA will establish regulatory requirements that are achievable for larger operators but may require more significant efforts by small operators. The NPRM for the commercial air carrier SMS requirement states that the proposed regulation is designed to be performance-based. FAA explained that because the regulation would be performance-based, it would allow commercial air carriers to comply through a variety of methods (suggesting flexibility) and to accommodate a variety of business sizes and models (suggesting scalability). The NPRM for the SMS requirement at certificated airports states that FAA envisions SMS as an adaptable and scalable system. FAA explained that an SMS could be developed by an organization to meet its unique operating environment. Accordingly, FAA stated in the NPRM that it would prescribe only the general framework for an SMS. Several stakeholders also expressed concerns about how SMS regulations would affect their current efforts to improve safety. For example, one design and manufacturing firm we spoke with reported that it began using system safety techniques similar to those used in SMS in the 1990s. Although this stakeholder participated in an FAA pilot, the firm has developed an SMS approach in-house based on the ICAO model. 
Fourteen stakeholders reported that prior to implementing SMS they had undertaken their own proactive safety or quality assurance efforts. FAA’s SMS ARC recommended in 2010 that SMS regulations allow for the incorporation of safety management efforts already in place in order to prevent duplicative safety efforts. FAA indicated that such programs could be used to build an operational SMS. Other stakeholders expressed concern that the content of the final rule may require them to adopt practices that may not improve safety in their company while imposing additional burdens. For example, one stakeholder hoped it would not be required to document every hazard identified and mitigated, stating that such documentation did not necessarily improve safety. We also asked all 20 industry stakeholders which of the four components of SMS would be the most challenging to implement, and more stakeholders (10) cited safety risk management than any other component. As previously discussed, safety risk management is designed to examine a company’s operational functions and operational environment to identify hazards and analyze associated risks. The intent of this process is to focus on the areas of greatest risk from a safety perspective and on mitigation, taking into account such factors as complexity and operational scope. Specifically, stakeholders indicated that conducting a risk analysis can be time-consuming. A robust analysis includes identifying a wide range of risks and determining which would require mitigation—a process that some stakeholders were concerned may keep staff from other duties. A number of stakeholders also noted the difficulty of identifying risks in some cases, including industry changes that may have an impact on risk. 
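The safety risk management process described above—identifying hazards, analyzing the associated risks, and deciding which require mitigation—is commonly organized around a severity-by-likelihood matrix. The sketch below is a minimal, hypothetical illustration of that idea; the scales, scores, acceptance threshold, and example hazards are assumptions, not FAA’s or ICAO’s actual values.

```python
# Minimal sketch of the severity-by-likelihood risk assessment underlying
# the safety risk management component described above. Scales and the
# acceptance threshold are illustrative assumptions.

SEVERITY = {"negligible": 1, "minor": 2, "major": 3, "catastrophic": 4}
LIKELIHOOD = {"rare": 1, "occasional": 2, "frequent": 3}

def risk_score(severity: str, likelihood: str) -> int:
    """Combine severity and likelihood into a single risk score."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

def needs_mitigation(severity: str, likelihood: str, threshold: int = 4) -> bool:
    """Flag hazards whose combined score exceeds the acceptance threshold."""
    return risk_score(severity, likelihood) > threshold

# Hypothetical hazards drawn from an SMS-style reporting process.
hazards = [
    ("runway debris", "major", "occasional"),
    ("worn signage", "minor", "rare"),
]
for name, sev, like in hazards:
    print(name, risk_score(sev, like), needs_mitigation(sev, like))
```

A matrix like this is what lets an organization focus mitigation effort on its areas of greatest risk rather than treating every identified hazard identically.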
Although 12 of the 20 aviation stakeholders we interviewed are continuing with their SMS implementation, they identified actions FAA could take to improve SMS effectiveness and help address the challenges of implementation. These suggestions target SMS training, guidance, and collaboration and communication. Although we did not ask specifically about any internal training programs developed by the 20 aviation stakeholders we spoke with, four stakeholders cited some difficulties with obtaining needed SMS training for their staff, including allocating time and resources for the training and finding training that was specific to their segment of the industry. Further, a 2012 study by the Transportation Research Board of FAA’s airport pilot studies found that pilot participants identified time restrictions and funding as obstacles to developing an SMS training plan for airport personnel. ICAO recommends that civil aviation authorities, like FAA, facilitate the SMS education or training of their stakeholders where feasible or appropriate. To that end, stakeholders suggested an action that FAA could take to mitigate stakeholder difficulties in this area. Specifically, two stakeholders we interviewed felt that allowing their staffs to attend the same training as FAA personnel could be beneficial in addressing differences in regulatory interpretation and in increasing industry’s understanding of FAA’s definitions and concepts as they relate to SMS. According to FAA officials, they have considered the training needs of industry stakeholders and provided some training through the pilot projects. Although FAA, as required by ICAO, has provided and updated guidance to help stakeholders implement SMS, some stakeholders saw a need for additional guidance and updates. For example, one stakeholder noted that FAA should create one risk matrix to be used by all organizations. Two stakeholders suggested standardizing SMS definitions across FAA. 
Another stakeholder suggested that FAA create a single document on the SMS framework. FAA has already responded to these concerns with an April 2012 order designed to establish common language standards for some SMS terminology; the order also provides a single risk matrix for the agency. Stakeholders also suggested that FAA develop additional guidance to aid them in documenting work processes, assessing organizational hazards, and estimating SMS costs. FAA plans to disseminate additional SMS guidance for air carriers and airports when final rules for these industry segments are published. The final rule for air carriers is expected to be published in September 2014. Fourteen of the 20 stakeholder representatives we interviewed were pleased with their collaboration and communication with FAA during the pilot projects; six stakeholders, however, noted that FAA could take additional actions to improve collaboration and communication in line with ICAO’s guidance for SMS. This guidance recommends establishing an appropriate communication platform to facilitate SMS implementation, particularly for SMS requirements and guidance material. FAA has used numerous methods to communicate about and encourage collaboration regarding SMS. These methods include the SMS newsletter, SMS focus groups, participation in the Safety Management International Collaboration Group, and the various SMS pilot projects. However, some stakeholders suggested that additional opportunities exist for the agency to share lessons learned. For example, one stakeholder was not aware of any FAA communications on its progress regarding the data protection and sharing issue, and three others thought that FAA needed to encourage the industry to share lessons learned through existing forums. 
Two stakeholders suggested using the Commercial Aviation Safety Team as an example of, and the Aviation Safety Information Analysis and Sharing system as a potential tool for, improved SMS collaboration between FAA and commercial air carriers. Two stakeholders also recommended broadening current FAA collaboration and communication efforts by opening membership in SMS focus groups and other working groups to additional industry members, such as foreign air carriers; increasing the number of conferences and meetings where information-sharing can take place; and increasing FAA attendance at these forums. The Airports Organization noted that it continually updates its SMS website, and Flight Standards indicated that it is allowing its most significant forum with the aviation industry, InfoShare, to focus more on SMS implementation and management. Maintaining a high level of safety in the U.S. aviation industry is a shared responsibility among FAA, air carriers, airports, and other stakeholders. FAA continues to implement SMS internally, in accordance with ICAO requirements, and industry stakeholders such as commercial airlines, certificated airports and others have begun to voluntarily adopt SMS in advance of regulations requiring its implementation. The successful implementation of SMS, with its proactive, risk-based approach to maintaining aviation safety, could help ensure the continued safety of the U.S. aviation system. However, the lack of a comprehensive plan for using data, which we recommended in 2010, may limit the effectiveness of SMS once it is implemented. Although FAA has taken steps to implement data quality controls that meet our standards, it has not disclosed its plans for using data collected or to ensure that it has the requisite staffing to make use of the data to be collected from commercial air carriers, certificated airports, and others. 
We continue to believe that the success of SMS relies on data to achieve maximum benefit and that such a data plan will strengthen FAA’s capability to improve aviation oversight. In addition, the SMS ARC recommended in March 2010 that, among other things, FAA ensure that sufficient planning, guidance, and workforce training are in place prior to SMS implementation. Without providing the necessary training and guidance to its inspector workforce, FAA may not be adequately prepared to ensure the benefits of SMS as industry sectors are required to implement it. Although FAA continues to encourage voluntary implementation of SMS, it also must be prepared to exercise its oversight functions, once final rules requiring SMS implementation are in place, to ensure that industry is developing and utilizing processes to identify, document, and mitigate safety risks. This includes providing clarification and guidance to air carriers, airports, and others as they develop their own SMSs once final rules for each industry segment are in place. To maximize the effectiveness and potential benefits of SMS implementation, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following action: Develop a plan to provide oversight of industry implementation of SMS, a plan that includes providing guidance and training to the relevant FAA inspectors by the time final SMS rules for industry sectors (commercial air carriers, certificated airports, repair stations, design and manufacturing firms) are published. We provided DOT with a draft of this report for review and comment. DOT provided technical corrections and clarifications, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the FAA Administrator, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or any of your staff members have any questions about this report, please contact me at (202) 512-2834 or at dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Our objective was to review the implementation of safety management systems (SMS) in the U.S. aviation industry and provide an update on SMS implementation within the Federal Aviation Administration (FAA). To do so, we addressed the following questions: 1. What is the status of SMS implementation within FAA and for key segments of the aviation industry, including certificated airports, commercial air carriers, repair stations, and design and manufacturing firms? 2. What are the key challenges FAA and the aviation industry face in implementing SMS? 3. What additional actions do aviation stakeholders believe FAA could take to improve SMS implementation and potential effectiveness? To assess the status of SMS implementation within FAA and for key segments of the aviation industry, we reviewed FAA orders and advisory circulars on SMS, reports from its SMS pilot projects for commercial air carriers, certificated airports, repair stations, and design and manufacturing firms, and reports from the SMS rulemaking projects for certificated airports and commercial air carriers. We also reviewed FAA information on industry SMS implementation status. In addition, we interviewed FAA officials and industry trade group representatives. To update the status of FAA’s implementation of SMS, we reviewed FAA guidance, implementation plans, and other documents published or revised since our 2012 report and interviewed FAA officials and trade group representatives. 
To determine the key challenges FAA and the aviation industry face in implementing SMS, as well as additional actions FAA and other stakeholders may take to improve the implementation and potential effectiveness of SMS, we interviewed officials from FAA, FAA employee groups, and industry groups. To obtain an international perspective on the challenges and additional actions, we interviewed representatives from foreign aviation authorities, specifically the European Aviation Safety Agency, Transport Canada, and the Civil Aviation Safety Authority of Australia. In addition, we interviewed two Canadian airlines and one Canadian airport that have implemented SMS. Table 3 lists industry and international interviewees. To obtain the industry stakeholder perspective for each of our objectives, we collected and analyzed information through structured interviews of a total of 20 industry stakeholders—5 airlines, 5 airports, 5 repair stations, and 5 design and manufacturing firms—that participated in FAA’s SMS pilot projects. We selected the certificated airports based on their hub size and geographic location, excluding airports that we had interviewed for our prior study. We chose the commercial air carriers based on carrier type (mainline or regional; cargo or charter), the level of SMS implementation they had reached by the time of our interview, and whether we interviewed them for the prior study; two of the air carriers we selected were interviewed for our 2012 study. For the repair stations, we chose five firms based on level of SMS implementation and the aircraft category (e.g., transport, commuter, or acrobatic) to which they provide services. We chose this selection factor as a result of research completed by the Center for Aviation Research, which found that SMS compliance for repair stations could be based on aircraft category. 
For design and manufacturing firms, we chose five firms based on the extent of SMS implementation, recommendations from industry stakeholders, and types of products they design or manufacture, or both. We conducted all stakeholder interviews with a standardized data collection instrument to maintain consistency across the interviews. To ensure that our interview questions were clear and reliable, we conducted two pretests with knowledgeable individuals and refined the interview questions based on those results. Because these 20 stakeholders comprise a nonrepresentative sample, the results from these interviews cannot be projected to the universe of these industry segments. Table 4 lists these selected industry stakeholders. We conducted this performance audit from May 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions, based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Heather MacLeod (Assistant Director), Amy Abramowitz, James Geibel, David Hooper, Christopher Jones, Delwen Jones, Brooke Leary, Josh Ormond, Pamela Vines, and Elizabeth Wood made key contributions to this report. Aviation: Status of DOT’s Actions to Address the Future of Aviation Advisory Committee’s Recommendations. GAO-13-657. Washington, D.C.: July 25, 2013. Aviation Safety: Status of Recommendations to Improve FAA’s Certification and Approval Processes. GAO-14-142T. Washington, D.C.: October 30, 2013. Department of Transportation: Key Issues and Management Challenges, 2013. GAO-13-402T. Washington, D.C.: March 14, 2013. General Aviation Safety: Additional FAA Efforts Could Help Identify and Mitigate Safety Risk. GAO-13-36. 
Washington, D.C.: October 4, 2012. Aviation Safety: Additional FAA Efforts Could Enhance Safety Risk Management. GAO-12-898. Washington, D.C.: September 12, 2012. Aviation Safety: FAA Is Taking Steps to Improve Data, but Challenges for Managing Safety Risks Remain. GAO-12-660T. Washington, D.C.: April 25, 2012. Aviation Safety: Enhanced Oversight and Improved Availability of Risk- Based Data Could Further Improve Safety. GAO-12-24. Washington, D.C.: October 5, 2011. Aviation Safety: Certification and Approval Processes Are Generally Viewed as Working Well, but Better Evaluative Information Needed to Improve Efficiency. GAO-11-14. Washington, D.C.: October 7, 2010. Aviation Safety: Improved Data Quality and Analysis Capabilities Are Needed as FAA Plans a Risk-Based Approach to Safety Oversight. GAO-10-414. Washington, D.C.: May 6, 2010.
The U.S. aviation system is one of the safest in the world, reflecting the work of FAA, industry, and others to continually improve safety. To further enhance safety, in 2005, FAA began adopting a proactive, data-driven, risk-based approach to managing safety, referred to as SMS, and has proposed rules that would require SMS implementation for certain segments of the aviation industry. GAO was asked to review SMS implementation in the aviation industry. This report addresses (1) the status of SMS implementation at FAA and in the aviation industry; (2) key challenges that FAA and industry face in implementing SMS; and (3) actions aviation stakeholders believe FAA could take to improve SMS implementation. GAO reviewed FAA documents and interviewed FAA officials. GAO also interviewed representatives from 20 selected aviation stakeholders, including commercial air carriers, certificated airports, repair stations, and design and manufacturing firms. Because the stakeholders were non-statistically selected based on their size, SMS implementation, and the industry segment represented, their views cannot be generalized to the industry or any industry segment. The Federal Aviation Administration's (FAA) Air Traffic Organization completed Safety Management System (SMS) implementation in 2010, and five other FAA organizations are implementing it now. SMS is an approach to collect and analyze safety data to identify hazards, manage risks, and take corrective action before an accident occurs. FAA's implementation activities include developing internal SMS guidance and procedures and using them to, among other things, identify hazards in the aviation system and provide oversight of the aviation industry. For example, FAA's Flight Standards Service is developing an SMS-based oversight system for the commercial air carriers it oversees. 
Although SMS is not yet required for commercial air carriers, airports, or any other industry segment, some are voluntarily implementing SMS as part of several FAA pilot projects. Of the 83 commercial air carriers, 77 are in the process of implementing SMS. FAA anticipates publishing a final rule in September 2014 requiring commercial air carriers to implement SMS. To a lesser extent, other industry segments are voluntarily implementing SMS. For example, according to FAA, 9 of the nation's largest airports are implementing SMS. FAA issued a proposed rule for airport SMS implementation, but development of a final rule has been delayed, and FAA has not yet determined if it will propose rules for other industry segments. Stakeholders and FAA officials speculated that the other industry segments may be waiting to implement SMS until FAA issues additional guidance or a final rule. According to FAA officials, completing the rulemaking processes for commercial air carriers and airports has been a primary challenge to industry SMS implementation. Officials stated that one reason for delay has been difficulty in developing the benefit-cost analyses required for significant regulatory action. However, FAA is revisiting these analyses through the ongoing rulemaking process. Uncertainty about FAA plans for SMS oversight was among the key challenges for aviation industry SMS implementation. Although some inspector training has been provided, representatives from 9 of the 20 stakeholders GAO interviewed cited concerns that FAA inspectors may not be adequately trained to oversee industry SMS activities, and 6 expressed concerns that inspectors throughout FAA may not consistently interpret SMS regulations. However, FAA has not completed plans for its SMS oversight activities, including inspector training, and officials stated that they would not do so until the final rule is published. 
Without adequate planning of oversight and training of inspectors, FAA could find itself unprepared to meet its oversight responsibilities when final SMS rules are published. Twelve of the 20 aviation stakeholders GAO spoke with identified additional FAA actions that could improve their SMS implementation efforts. For example, 4 stakeholders stated that providing SMS training to their employees was a challenge, and 2 suggested that FAA could assist by providing them access to FAA's SMS training. FAA indicated that it is considering industry stakeholder training needs and has provided training through the pilot projects. Fourteen stakeholders were pleased with FAA's collaboration and communication, but 6 of them stated that this effort could be broadened. FAA has been updating its SMS website information, and its most significant industry SMS forum is focusing more on SMS implementation. GAO recommends that FAA develop a plan for overseeing industry SMS implementation that includes providing guidance and training for FAA inspectors by the time final rules are published. GAO provided DOT with a draft of this report for comment. DOT provided technical corrections, which were incorporated as appropriate.
FBCB2 will be the principal digital command and control system for the Army at the brigade level and below and will constitute the third major component of the Army’s Battle Command System. Currently, the Battle Command System comprises (1) the Global Command and Control System-Army, located at strategic and theater levels, which interoperates with other theater, joint, and multinational command and control systems and with Army systems at corps level and below, and (2) the Army Tactical Command and Control System, which meets the command and control needs from corps to battalion. When fielded, FBCB2 is expected to provide enhanced situational awareness to the lowest tactical level—the individual soldier—and a seamless flow of command and control information across the battle space. To accomplish these objectives, FBCB2 will be composed of a computer that can display a variety of information, including a common picture of the battlefield overlaid with graphical depictions (known as icons) of friendly and enemy forces; software that automatically integrates Global Positioning System data, military intelligence data, combat identification data, and platform data (such as the status of fuel and ammunition); and interfaces to communications systems. Battlefield data will be communicated to and received from users of FBCB2 through a “Tactical Internet.” This is a radio network comprising the Enhanced Position Location Reporting System (EPLRS) and the Single Channel Ground and Airborne Radio System (SINCGARS). By connecting platforms through this Tactical Internet, data needed for battlefield situational awareness and command and control decisions can be made available to commanders at all levels of the Army’s Battle Command System. To explore the FBCB2 concept, the Army acquired and installed sufficient quantities of equipment to field a brigade-size experimental force in June 1996. 
This experimental force then used FBCB2 prototype equipment in an Advanced Warfighting Experiment, which culminated in March 1997 during a 2-week deployment against an opposing force at the National Training Center, Fort Irwin, California. Results from the Advanced Warfighting Experiment were considered sufficiently positive that the Army conducted an FBCB2 milestone I/II review in July 1997. FBCB2 was conditionally approved for entry into the engineering and manufacturing development acquisition phase (acquisition milestone II) pending completion of certain essential action items, including the final Operational Requirements Document and the Test and Evaluation Master Plan. The program is expected to incur life-cycle costs of about $3 billion (in then-year dollars) by fiscal year 2012. DOD Regulation 5000.2R offers a general model for management of the acquisition process for programs such as FBCB2. This regulation states that managers shall structure a program to ensure a logical progression through a series of phases designed to reduce risk, ensure affordability, and provide adequate information for decision-making. At the start of a program, consideration is given to program size, complexity, and risk and a determination is made regarding acquisition category. More costly, complex, and risky systems are generally accorded more oversight. The determination made at program initiation is reexamined at each milestone in light of then-current program conditions. The regulation describes the differences among acquisition categories and places them in one of three categories: I, II, or III. In general, the milestone decision authority for category I programs is at a higher level than category II or III programs. In addition, category I programs generally require that more information—such as an Analysis of Alternatives and a Cost Analysis Improvement Group review—be available for decision-making. 
Category I programs are defined as programs estimated by the Under Secretary of Defense for Acquisition and Technology to require eventual expenditure for research, development, test, and evaluation of more than $355 million (fiscal year 1996 constant dollars) or procurement of more than $2.1 billion (fiscal year 1996 constant dollars). Category II programs have lower dollar classification thresholds than category I programs; for example, the research, development, test, and evaluation dollar threshold for an acquisition category II program is $140 million (fiscal year 1996 constant dollars). Category III programs are defined as those that do not meet the criteria for category I or II programs. FBCB2 is currently designated a category II acquisition on the basis of the Army’s estimate of research, development, test, and evaluation costs. As a result, oversight is provided within the Army. We believe that the program should be a category I acquisition on the basis of (1) the significance of the program; (2) its estimated research, development, test, and evaluation costs; and (3) high schedule risk. The Army acknowledges that the program schedule involves high risk. Throughout the next decade and beyond, the Army plans to modernize its forces through an overarching initiative called Force XXI. Components of the Force XXI initiative are Army XXI, which extends to about the year 2010, and the Army After Next, which looks beyond the year 2010. Included within the modernization objectives of Army XXI is the integration of information technologies to acquire, exchange, and employ timely information throughout the battle space. In general, integrated situational awareness and command and control information technologies available to Army commanders currently extend through the Army Tactical Command and Control System to tactical operations centers at the brigade and battalion levels. 
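The dollar-threshold rule described above can be sketched as a simple classifier. This is an illustrative sketch only: the function name is ours, the thresholds are the fiscal year 1996 constant-dollar figures cited in this report, and the category II procurement threshold (not given here) is omitted; an actual designation also weighs program size, complexity, and risk.

```python
# Acquisition category thresholds cited in this report, in millions of
# fiscal year 1996 constant dollars (per DOD Regulation 5000.2R).
CAT_I_RDTE = 355.0          # research, development, test, and evaluation
CAT_I_PROCUREMENT = 2100.0  # procurement
CAT_II_RDTE = 140.0         # category II procurement threshold omitted here

def acquisition_category(rdte, procurement=0.0):
    """Classify a program by estimated cost (dollar test only)."""
    if rdte > CAT_I_RDTE or procurement > CAT_I_PROCUREMENT:
        return "I"
    if rdte > CAT_II_RDTE:
        return "II"
    return "III"

# The Army's $265.4 million estimate keeps FBCB2 in category II ...
print(acquisition_category(265.4))   # II
# ... while GAO's adjusted estimate crosses the category I threshold.
print(acquisition_category(385.7))   # I
```

On the dollar test alone, the dispute over which costs count toward the estimate is what separates the two designations.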
By extending the integration of information technologies to the thousands of soldiers operating outside the tactical operations centers, the Army expects to increase the lethality, survivability, and operational tempo of its forces. FBCB2 is the critical link needed to extend the information to those soldiers. On August 1, 1997, the Deputy Chief of Staff for Operations and Plans announced that the first digitized division would be the 4th Infantry Division and that, at a minimum, fielded equipment would include the Army Training and Doctrine Command’s list of priority one systems and associated equipment. The Training and Doctrine Command has identified 15 priority one systems. They primarily consist of command, control, and communications systems, including FBCB2. FBCB2 is considered a critical element within the Army’s digitization effort because of the contribution it makes to achieving the required capabilities for the digitized battlefield. Approved by the Joint Requirements Oversight Council of the Joint Chiefs of Staff in January 1995, these capabilities are integrated battle command from platoon to corps, a relevant common picture of the battle space at each level, smaller units that are more lethal and survivable, more responsive logistics within and between theaters, and joint interoperability at appropriate levels. It is unlikely that all of the required capabilities of the digitized battlefield can be achieved without FBCB2. However, despite this critical role, the Army has not designated FBCB2 as a category I acquisition—a designation it has given to the other major systems in the Army’s Battle Command System. 
The significance of this program has also been noted by the Office of the Secretary of Defense, Operational Test and Evaluation Office, which in October 1997 recommended that FBCB2 be elevated to acquisition category I-D status on the basis of the program’s “significant and far-reaching impact.” That office placed FBCB2 on the same level as the Army’s Maneuver Control System, which is also an acquisition category I-D program. The Maneuver Control System is a key component of the Army’s Tactical Command and Control System that provides automated critical battlefield assistance to commanders and their battle staff at the corps-to-battalion level. The Army’s cost estimate for research, development, test, and evaluation activities, adjusted to fiscal year 1996 constant dollars, is $265.4 million. This estimate covers the period from fiscal year 1997 through fiscal year 2004. However, we believe the Army’s estimate is understated in that other research, development, test, and evaluation costs should be added. As shown in table 1, these costs raise the research, development, test, and evaluation cost estimate above the category I threshold of $355 million. We discussed these figures with Army program officials, and they agreed with $7.2 million of our additional costs: $2 million in Warfighter Rapid Acquisition Program funds related to the FBCB2 computer ($1.4 million in fiscal year 1997 and $0.6 million in fiscal year 1998) and a $5.2 million difference between the amount included in the life-cycle cost estimate ($47 million) and the actual budget request ($52.5 million), converted to 1996 constant dollars. 
Army officials disagreed with the addition of the remaining cost categories, amounting to $113.1 million, on the basis that (1) Army policies and procedures require them to include only those funds obligated by the program office after the establishment of a formal acquisition program; (2) FBCB2-related funds obligated by other program managers, such as the Abrams and Bradley managers, should be excluded; and (3) costs directly related to test and evaluation activities for acquisition category II programs, like FBCB2, are identified in the Army’s Operational Test and Evaluation Command’s Support of Operational Testing program element. Our assessment of the Army’s arguments follows. The Army Digitization Program provided $47.6 million for FBCB2 research, development, test, and evaluation activities through fiscal year 1996. These funds were used to buy FBCB2 prototype hardware and software used in the Advanced Warfighting Experiment at the National Training Center. Army officials stated that these funds were obligated prior to the establishment of the FBCB2 acquisition program and thus should not be included in this cost estimate. We found that the Army had included these funds in its total life-cycle cost estimate and, while the source of the funds was the digitization program element, the explanation to the Congress in appropriate descriptive summaries shows the funds were needed for activities related to the development of FBCB2 hardware and software. Therefore, we believe these funds should be included in the derivation of the FBCB2 research, development, test, and evaluation cost estimate. Our analysis shows that $2.8 million in fiscal year 1997 funding and $1.9 million in fiscal year 1998 funding were specified for FBCB2 platform (shown as Applique in table 1) integration activities and obligated by the Abrams and Bradley program managers. 
Army officials stated that a new Army regulation requires that all platform-related costs be identified as part of the total platform cost and that these funds were given to and obligated by the Abrams and Bradley program offices. However, the Army obtained these funds from the Warfighter Rapid Acquisition Program on the basis that they would be used to provide an improved design that was not part of the original FBCB2 budget. Additionally, when requesting these funds, the Army stated that, without this funding, FBCB2 would be at risk of not meeting its fiscal year 2000 deadline. In our opinion, since these funds were specifically requested, used, and obligated for FBCB2, they should be considered part of the total research, development, test, and evaluation cost estimate. Our analysis also shows that $7.8 million in fiscal year 1997 and $7.7 million in fiscal year 1998 were requested to complete system engineering and integration work on the Tactical Internet. According to Army officials, these funds were obligated by program managers for Tactical Radio Command Systems and Warfighter Information Network-Terrestrial and, since they were not controlled or obligated by the FBCB2 program manager, should not be included in the estimate. We believe these funds should be included as part of the FBCB2 research, development, test, and evaluation cost because the Army justified its need for these funds on the basis that they would be used to correct known shortcomings and make the Tactical Internet compatible with the evolution of the FBCB2 software development effort. In describing the critical nature of the funding, the Army concluded that without the Tactical Internet there would be no FBCB2. 
We also found that interface funding is specifically characterized in the fiscal year 1999 Army descriptive summary for the Digitization Program element as needed to complete integration, procure prototypes, and initiate testing of FBCB2 in the M1A1 Abrams, the M1A2 Abrams with system enhancements, and the M2A2 Bradley Operation Desert Storm configurations. Therefore, we believe these funds are more appropriately categorized as FBCB2-related research, development, test, and evaluation rather than as activities unique to the Abrams or Bradley platforms. According to Army policy, test and evaluation costs associated with a category I program are included in the program element. Since we believe FBCB2 should be classified as a category I acquisition, we included $8.5 million in fiscal year 1998 for the FBCB2 Limited User Test, $15.4 million in fiscal year 1999 for the FBCB2 Initial Operational Test and Evaluation, and $7.5 million in fiscal year 2000 for the FBCB2 Initial Operational Test and Evaluation. We were unable to determine the estimated costs for Force Development Test and Evaluation; had we been able to do so, these costs would also have been included in our estimate. Our belief that FBCB2 is justifiably a category I acquisition on the basis of cost is shared by an office in the Under Secretary of Defense for Acquisition and Technology. In November 1997, the Director, Test, System Engineering, and Evaluation, recommended that FBCB2 be designated a category I-D program because of “significant integration risks with other major systems and the potential dollar thresholds involved.” The Director noted that cost estimates do not include communications and integration costs that potentially will drive the program above category II thresholds. We believe examples of these types of costs discussed in this report are communication costs associated with the Tactical Internet and integration costs associated with the Abrams and Bradley platforms. 
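As a quick arithmetic check on the figures discussed above: adding the $7.2 million the Army agreed with and the $113.1 million it disputed to the Army's $265.4 million baseline yields an adjusted estimate above the $355 million category I threshold.

```python
# All figures in millions of fiscal year 1996 constant dollars,
# as reported in this section.
army_baseline = 265.4       # Army's RDT&E cost estimate, FY 1997-2004
agreed_additions = 7.2      # Warfighter Rapid Acquisition Program funds
                            # plus the budget-request difference
disputed_additions = 113.1  # digitization, platform integration,
                            # Tactical Internet, and test costs

adjusted_estimate = army_baseline + agreed_additions + disputed_additions
print(round(adjusted_estimate, 1))  # 385.7
print(adjusted_estimate > 355.0)    # True: above the category I threshold
```

The disputed $113.1 million alone is what carries the estimate across the threshold; the agreed additions by themselves would not.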
Army program management officials expressed concern about a category I-D designation for the FBCB2 program because it would require the insertion of formal oversight review milestones, with their consequent resource demands, into an already risky schedule. However, our recent discussions with these officials disclosed that issues of cost estimates and acquisition category are still being explored. For example, a comprehensive Army cost estimate, currently being developed with the help of the Cost Analysis Improvement Group, is expected to be available by September 1998. According to these officials, the FBCB2 Overarching Integrated Product Team is trying to reach a consensus on a recommendation regarding the appropriate amount of oversight required for the program. That recommendation may await the outcome of the Army’s ongoing cost estimate effort. To achieve the Army’s end of fiscal year 2000 schedule, the FBCB2 program will need to pass a series of tests, including two operational tests. Additionally, new versions of two weapon systems participating in the FBCB2 operational tests will be concluding their own testing just prior to the start of FBCB2 operational testing. The Army acknowledges that the program schedule involves high risk. However, despite this acknowledged schedule risk, the Army is moving ahead with its highly compressed schedule without specifically addressing the implications of not fielding an adequately developed system by the end of fiscal year 2000. In its effort to move the program rapidly along to meet the year 2000 implementation deadline, the Army is making decisions that may prove troublesome later in the acquisition. In this regard, we found that the development of critical acquisition documentation and plans is experiencing significant delays. 
For example, in July 1997 the Army made the decision to move FBCB2 to acquisition milestone II (Engineering and Manufacturing Development) contingent on completion of the Operational Requirements Document and the Test and Evaluation Master Plan by November 1, 1997. In November 1997, these actions had not been completed and the new expected approval date for these documents slipped to March 1998. Our discussions with Army officials now indicate that these documents are not expected to be complete and approved by the Joint Requirements Oversight Council until July 1998. This means that the Army is currently relying on a December 1997 Training and Doctrine Command-approved Operational Requirements Document as the basis for the program until it is replaced by the Joint Requirements Oversight Council-approved Operational Requirements Document. Therefore, the requirements process is expected to conclude only 1 month prior to the start of the first FBCB2 operational test—Limited User Test—in August 1998. Further, to meet the Army’s fiscal year 2000 schedule, the FBCB2 program will need to successfully complete a series of tests, including two operational tests. Each test requires different versions of software for each of the two hardware components—the computer and the communications interface unit. The second operational test also requires that FBCB2 software be successfully integrated into the new digitized versions of the Abrams tank and the Bradley Fighting Vehicle. The new versions of these platforms will be concluding their own independent operational test and evaluations—to demonstrate the capability of the platforms as weapon systems—just prior to the start of the FBCB2 Initial Operational Test and Evaluation. These scheduled activities are shown in figure 1. As shown in figure 1, between now and the planned fielding in fiscal year 2000, FBCB2 will undergo two field tests, two operational tests, and one force development test. 
Throughout the test period, four different versions of software for the computer and communications interface unit will be used, with a fifth version actually fielded to the first digitized division. In an effort to reduce risk, the Army will be employing “spiral software builds” throughout the test period. According to program officials, spiral software builds increasingly integrate the data from other systems, such as the Army Tactical Command and Control System, into the FBCB2 system. Each version is expected to add new functionality into the previous versions, thus building upon the existing baseline. A field test is currently being conducted prior to the start of the Limited User Test and another will be held in 1999 prior to the Force Development Test and Evaluation. The main objectives of these field tests are to determine FBCB2 readiness for the Limited User Test and the Force Development Test and Evaluation and to make necessary modifications to the FBCB2 software. The first operational test will be the Limited User Test scheduled for the last quarter of fiscal year 1998. Its main objective is to test new hardware and software developed since the conclusion of the Task Force XXI Advanced Warfighting Experiment. The new version of the FBCB2 computer is called “Applique +.” One limitation of the test is that only “appliqued platforms”—the Abrams M1A1D and the Bradley M2A2 Operation Desert Storm configurations—will be used. No newer digitized platforms, such as the Abrams M1A2 or the Bradley M2A3 configurations (which require FBCB2 embedded battle command software only), will be used. The Force Development Test and Evaluation is scheduled for the last quarter of fiscal year 1999. The purpose of the test is to evaluate the tactics, techniques, and procedures established for two digitized brigades of the 4th Infantry Division. At this point, it is not clear which configurations of weapon platforms will participate in this test. 
The second operational test is the Initial Operational Test and Evaluation for FBCB2 and is scheduled for the first quarter of fiscal year 2000. The testing is intended to demonstrate that the FBCB2 system is operationally effective, suitable, and survivable, and the results will be used to support the FBCB2 production decision. While it is expected that some Abrams and Bradley configurations using the FBCB2 embedded battle command software will be available for this test, the latest draft version of the FBCB2 Test and Evaluation Master Plan acknowledges that not all embedded FBCB2 platforms (for example, Land Warrior, Paladin, Crusader, and selected aviation platforms) are expected to be available to participate in the test. The majority of these platforms are still in development and cannot be tested until follow-on operational test and evaluation events. In addition to the various software versions, the Army will be introducing new versions of two radios into the test events—an Advanced SINCGARS System Improvement Program radio and the EPLRS Very High Speed Integrated Circuit radio. Although the development of these radios has been closely coordinated with the demands of the Tactical Internet and FBCB2, they remain separately managed and funded programs. Synchronizing the radios’ schedule with FBCB2’s aggressive schedule remains a challenge. Overlaying the introduction of new hardware, software, and radios will be new doctrine, tactics, techniques, and procedures associated with using these new capabilities. We believe that the introduction of so many new and diverse elements—hardware, software, radios, doctrine, tactics, techniques, and procedures—over the 18-month period of testing, coupled with the Army’s expectation that the first division will be equipped by the end of fiscal year 2000, results in a highly complex and aggressive FBCB2 schedule. 
Both the Army Digitization Office and FBCB2 program office officials acknowledge that the aggressive schedules to mature and integrate multiple systems pose a high risk for successful program completion. In our opinion, risk is further heightened because there is no apparent risk mitigation strategy addressing the implications of the Army’s not meeting the goal of having a functional digitized division by the end of fiscal year 2000. Compounding the FBCB2 schedule risk is the test schedule for the only two weapon platforms scheduled to be involved in FBCB2 initial operational testing. The M1A2 Abrams with system enhancements and the M2A3 Bradley will be undergoing their own independent operational testing during the FBCB2 engineering and manufacturing development phase. Specifically: The M1A2 Abrams tank with system enhancements is scheduled for a follow-on operational test and evaluation April-July 1999. As a risk mitigation measure, an early version of the FBCB2 embedded battle command software, version 1.02b, will be used to evaluate the interface between FBCB2 and the platform software. Command and control functionality will not be tested until the FBCB2 Initial Operational Test and Evaluation in October 1999. The M2A3 Bradley Fighting Vehicle is also scheduled for an Initial Operational Test and Evaluation April-July 1999. The Bradley test will not use any FBCB2 software. As with the Abrams, command and control functionality will not be tested until the FBCB2 Initial Operational Test and Evaluation in October 1999. For FBCB2 operational testing, both Abrams and Bradley platforms will use embedded battle command software version 3.1. Officials from both the Abrams and Bradley offices highlighted the development of the interface between their intravehicle digitized systems and the FBCB2 software as a concern. 
According to these officials, the newer versions of the Abrams and the Bradley are already digitized in that they have an on-board data processing capability, including mission-critical software. These officials were uncertain about the impact of introducing the FBCB2 software into the platforms. Training and fielding concerns were also expressed by these officials. Abrams officials further noted that their experiences indicate that crews need about 12 months to practice with new software versions before they become proficient. Under the current test schedule, crews would have only 3 months to become proficient before the FBCB2 Initial Operational Test and Evaluation. Since the FBCB2 program has only recently entered engineering and manufacturing development and is scheduled to undergo about 18 months of testing, no operational evaluations are yet available for analysis. However, a prototype of the system participated in the Task Force XXI Advanced Warfighting Experiment, which concluded in March 1997. The experimental results were analyzed by the Army’s Operational Test and Evaluation Command and DOD’s Director, Operational Test and Evaluation. The Army’s Operational Test and Evaluation Command’s comprehensive Live Experiment Assessment Report offered various assessments of the FBCB2 prototype. The report candidly discussed poor message completion rates, difficulty with message formats, and the limitations of the experimental hardware and software. The report also acknowledged that potential exists for future improvements. 
The report offered the following recommendations for the continued development and maturity of the FBCB2 system: (1) continuing to experiment with Applique/FBCB2 using other interface devices, evolving to a voice-activated, hands-free system; (2) determining the most critical/useful functions and eliminating noncritical functions; (3) improving vehicle hardware integration; and (4) continuing to develop and mature the Applique Combat Service Support functions. The Director, Operational Test and Evaluation, through the Institute for Defense Analyses, assessed and evaluated the battlefield digitization aspects of the Task Force XXI Advanced Warfighting Experiment in order to achieve early operational insights before the beginning of formal operational testing. Specific systems observed were the Applique and the Tactical Internet. The oversight effort was conducted in partnership with the Army’s Operational Test and Evaluation Command, in recognition of the unique nature of the experiment (as distinct from an operational test). The Director’s report also identified a lack of (1) adequate digital connectivity; (2) maturity of the Applique and the Tactical Internet; (3) adequate tactics, techniques, and procedures for operations with digital equipment; and (4) tactical skills resulting from inadequate unit collective training. The report recommended continued oversight and evaluation of the upcoming operational tests of FBCB2. Army program officials currently assess the program’s technical risk as medium. Even though FBCB2 is one of the Army’s top priorities and a key component of the systems needed to field the first digitized division, the Army has not designated the program as a category I acquisition. The Army believes that the program does not meet the required dollar threshold for a category I acquisition on the basis of total research, development, test, and evaluation costs. 
Program management officials have also expressed concern that the additional review and data collection requirements associated with a category I designation would delay the program. They contend that such a delay would prevent them from achieving the goal of fielding the first digitized division by the end of fiscal year 2000. In our opinion, the significance of the program; its estimated research, development, test, and evaluation cost; and the high schedule risk are compelling reasons for greater oversight. Accordingly, we believe elevating the program to a category I designation would help ensure that adequate management information is developed and provided to decisionmakers to reduce risk, ensure affordability, and better achieve the objectives of DOD Regulation 5000.2R. Therefore, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition and Technology to (1) consider our analysis of the FBCB2 program and determine whether it should be appropriately characterized as an acquisition category I-D on the basis of its significance to the Army’s battlefield digitization goal, the costs we discuss in this report, schedule risk, the new Army cost estimate expected to be available by the end of this fiscal year, and the benefits of prudent oversight and (2) analyze, regardless of eventual category designation, the risks and likely immediate benefits associated with equipping a division with an FBCB2 capability by the end of fiscal year 2000 and provide guidance to Army acquisition executives on managing those risks. In commenting on a draft of this report, DOD neither agreed nor disagreed with our recommendations. In its response, DOD made two points. First, DOD indicated that Overarching Integrated Product Teams—chaired by high-level DOD officials—are addressing the issues discussed in our report and that a decision would be made on acquisition level categorization by the fourth quarter of fiscal year 1998. 
Second, DOD stated that risk management efforts and digitization benefits are continuing to be discussed. DOD described illustrative risk mitigation activities developed by Army officials and reiterated its support of the Army’s digitization efforts. While it appears that the FBCB2 acquisition category issue will be resolved by the end of this fiscal year, we remain concerned about the cost, schedule, and performance risks associated with equipping a division by the end of fiscal year 2000 and the implications of not fielding an adequately developed system by that deadline. We continue to believe that this program should be designated an acquisition category I-D and that departmental guidance should be provided to the Army on managing the risks of not meeting such a short-term mandated deadline. DOD’s comments are reprinted in their entirety in appendix I, along with our evaluation. In addition, DOD provided technical comments that have been incorporated, as appropriate, in the report. To evaluate the significance of the FBCB2 program, we reviewed the objectives of the Army XXI and Army After Next initiatives, the priority of FBCB2 within the Army’s digitization programs, system comparability with other Army command and control programs, and an assessment of FBCB2’s significance prepared by the Office of the Secretary of Defense’s Operational Test and Evaluation Office. We also analyzed early Army actions to maintain the system’s schedule for equipping the first digitized division. 
To evaluate program cost estimates, we reviewed the Army’s life-cycle cost estimate; converted research, development, test, and evaluation estimates to fiscal year 1996 dollars; compared the fiscal year 1999 FBCB2 budget request with amounts contained in the life-cycle cost estimate; analyzed the fiscal year 1997 and 1998 amounts appropriated to the Army for FBCB2-related Force XXI Initiatives; and developed estimates of costs incurred by Abrams and Bradley program managers for FBCB2-related activities and test and evaluation costs funded outside the FBCB2 program element. We also analyzed early program cost experiences, particularly the reprogramming action requested for the fiscal year 1998 FBCB2 unfunded requirement. To evaluate the feasibility of the Army’s fielding schedule, we analyzed the events within the FBCB2 schedule; discussed the events with appropriate officials, including representatives of the Abrams and Bradley program offices; and obtained assessments of the risks associated with fielding an FBCB2 capability to an Army division by the end of fiscal year 2000. In reviewing experimental performance results of the FBCB2 prototype at the Task Force XXI Advanced Warfighting Experiment, we considered the Army’s Operational Test and Evaluation Command’s Live Experiment Assessment Report and the Director, Operational Test and Evaluation, briefing on early operational insights. In addition, in March 1997, prior to the request for this work, we attended the Force XXI Advanced Warfighting Experiment at Fort Irwin and accompanied representatives of the Operational Test and Evaluation Command to observe and obtain firsthand knowledge of the performance of FBCB2 and other initiatives being tested. We also attended after-action sessions in which activities carried out during the exercise were evaluated by top commanders. 
In the course of our work, we also interviewed program officials and examined program management and budget documents, draft system requirements, draft test plans, acquisition plans, and other program documentation. We performed work primarily at the Army Digitization Office, Arlington, Virginia, and the Army Communications and Electronics Command, Fort Monmouth, New Jersey. We also gathered data from the Army Tank Automotive and Armaments Command, Warren, Michigan; Director, Operational Test and Evaluation, Arlington, Virginia; Director, Test, Systems Engineering, and Evaluation, Arlington, Virginia; Army Operational Test and Evaluation Command, Alexandria, Virginia; and the Division XXI Advanced Warfighting Experiment, Fort Hood, Texas. Because the FBCB2 Operational Requirements Document is not yet final, we were unable to review an approved version of program requirements. We performed our review from September 1997 to April 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to other appropriate congressional committees; the Director, Office of Management and Budget; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. Copies will also be made available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. The major contributors to this report were Charles F. Rey, Robert J. Dziekiewicz, and Paul G. Williams. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated June 5, 1998. 1. DOD commented that through Integrated Product Team meetings the issues such as program significance, cost, and schedule risk—discussed in our report—are being addressed. 
Although DOD did not elaborate on how the teams were addressing the issues of significance or schedule risk, it did acknowledge that the Office of the Secretary of Defense’s Cost Analysis Improvement Group is currently working with the Army’s Cost and Economic Analysis Center to validate the Force XXI Battle Command, Brigade and Below (FBCB2) program costs. This effort is expected to be completed by the fourth quarter of fiscal year 1998. In our opinion, the results of this analysis, as well as the information we have presented on program significance and schedule risk, should be considered in developing the actions taken in response to our recommendation. 2. DOD commented that the spiral software development, the series of tests that have started or are scheduled to be conducted prior to the October 1999 Initial Operational Test and Evaluation, and guidance from the Overarching Integrated Product Team all provide some degree of risk management. We continue to believe these actions do not constitute an adequate risk mitigation strategy for the reasons discussed in the body of our report and summarized as follows: Even with the guidance of the Overarching Integrated Product Team, the fact that so many system development tests are being compressed to meet an 18-month schedule because of the mandated fiscal year 2000 deadline is, in our view, a high-risk approach to successful system development. The spiral software development model discussed by DOD will not guarantee success. Even with users involved during the frequent tests, it is unlikely that there is enough time between tests for DOD to adequately correct discovered deficiencies and implement other desired changes. Further, DOD states that a working group is planned to evolve this spiral development concept for software in the spirit of acquisition streamlining. 
We believe that the time for evolving this concept, as it relates to FBCB2, is past, and concentrated effort must be focused on successfully completing the scheduled tests and containing escalating costs. DOD is proceeding with FBCB2 development on the basis of an Operational Requirements Document and a Test and Evaluation Master Plan, which are still in the process of being reviewed for approval by the Joint Requirements Oversight Council. This, in our opinion, is another impediment to adequate risk mitigation because DOD is attempting to develop a system that may or may not be addressing appropriate requirements. We still believe that the discussion in our report on these issues supports the need for DOD and the Army to follow the more formal approach to risk mitigation planning as required by DOD Regulation 5000.2R for acquisition category I programs.
Pursuant to a congressional request, GAO reviewed the Army's acquisition plans for the Force XXI Battle Command, Brigade and Below (FBCB2) program, focusing on the: (1) program's significance to the Army's battlefield digitization goal; (2) Army's derivation of cost estimates; and (3) feasibility of the Army's fielding schedule. GAO noted that: (1) on the basis of the Army's estimate of FBCB2 research, development, test and evaluation costs, the program has been classified as a category II acquisition--one that does not require systematic oversight by the Under Secretary of Defense for Acquisition and Technology; (2) GAO believes that because of the FBCB2's significance, cost, and schedule risk, the FBCB2 should be classified as a category I acquisition and receive a higher level of oversight; (3) although FBCB2 is critical to the Army's digitization plan--the system ties the upper level command and control systems to the digital battlefield--FBCB2 is the only major system in the Army's Battle Command System that has not been designated category I; (4) the system's potential to provide thousands of soldiers with the ability to send and receive clear and consistent battlefield information in almost real time demonstrates the system's significance as a linchpin of the digital battlefield; (5) this significance is confirmed by the Army's own designation of FBCB2 as one of the highest priority command and control systems and the Army's plan to equip a division with a FBCB2 capability by the end of fiscal year (FY) 2000; (6) GAO's analysis indicates that there are additional research, development, test, and evaluation costs that, when included, increase the dollar significance of this program to a category I acquisition level; (7) the FBCB2 program faces a significant schedule risk in meeting the FY 2000 mandate for fielding the first digitized division; (8) however, despite this acknowledged schedule risk, the Army is moving ahead with its highly compressed schedule 
with no apparent risk management strategy specifically addressing alternatives and the implications of not fielding an adequately developed system by the end of FY 2000; (9) because the FBCB2 program has only recently entered engineering and manufacturing development, no operational evaluations are yet available for analysis; (10) however, the 1997 Task Force XXI Advanced Warfighting Experiment employed a prototype FBCB2; (11) two independent organizations, the Army's Operational Test and Evaluation Command and the Office of the Secretary of Defense, Operational Test and Evaluation Office, assessed FBCB2 results and found a number of problems; (12) these included poor message completion, limitations related to the experimental hardware and software, a lack of adequate digital connectivity, immaturity of the Applique (the Army's name for the FBCB2 computer) and the Tactical Internet, and inadequate training; and (13) Army officials currently assess the program's technical risk as medium.
JSF is a joint, multinational acquisition program for the Air Force, Navy, Marine Corps, and eight cooperative international partners. The program began in November 1996 with a 5-year competition between Lockheed Martin and Boeing to determine the most capable and affordable preliminary aircraft design. Lockheed Martin won the competition, and the program entered system development and demonstration in October 2001. The program’s objective is to develop and deploy a technically superior and affordable fleet of aircraft that support the warfighter in performing a wide range of missions in a variety of theaters. The single-seat, single-engine aircraft is being designed to be self-sufficient or part of a multisystem and multiservice operation, and to rapidly transition between air-to-surface and air-to-air missions while still airborne. To achieve its mission, JSF will incorporate low observable technologies, defensive avionics, advanced onboard and offboard sensor fusion, internal and external weapons, and advanced prognostic maintenance capability. According to DOD, these technologies represent a quantum leap over legacy tactical aircraft capabilities. At the same time, the JSF aircraft design includes three variants: a conventional take-off and landing variant for the Air Force; an aircraft carrier-suitable variant for the Navy; and a short take-off and vertical landing variant for the Marine Corps and the United Kingdom. JSF is intended to replace a substantial number of aging fighter and attack aircraft in DOD’s current inventory. In 2003, the JSF program’s system integration efforts and a preliminary design review revealed significant airframe weight problems that affected the aircraft’s ability to meet key performance requirements. Software development and integration also posed a significant development challenge. 
The program’s inability to meet its ambitious goals resulted in the Department’s failing to deliver on the business case that justified initial investment in JSF. As a result, purchase quantities have been reduced, total program costs have increased, and delivery of the initial aircraft has been delayed. These changes have effectively reduced DOD’s buying power for its investment as it will now be buying fewer aircraft for a greater financial investment. It is too late for the program to meet these initial promises. To its credit, in fiscal year 2004, DOD rebaselined the program, extending development by 18 months and adding resources to address problems discovered during systems integration and the preliminary design review. Program officials also delayed the critical design reviews, first flights of development aircraft, and the low-rate initial production decision to allow more time to mitigate design risk and gather more knowledge before continuing to make major investments. Table 1 shows the evolution of cost and delivery estimates from the start of the program up to the latest official program information as of December 2005. Since establishing a new program baseline in fiscal year 2004, JSF program costs have risen and key events have been delayed. JSF program costs have increased by $31.6 billion since the program’s decision to rebaseline in fiscal year 2004. This includes a $19.8 billion increase in costs since our last report, issued in March 2006. The program has experienced delays in several key events, including the start of the flight test program, manufacturing and delivery of the first development aircraft, and testing of critical mission systems. These delays reduce the amount of time available for completing flight testing and development activities. The program projects that it will meet its key performance requirements except for one dealing with the warfighter’s ability to fully interoperate with other platforms. 
Projections are based largely on engineering analysis, modeling, and laboratory testing, and a 7-year test program to demonstrate performance just started in December 2006. JSF program cost estimates have increased by $31.6 billion since the program’s decision to rebaseline in fiscal year 2004. During this period, estimates in some cost areas grew by $48 billion but were offset by $16.4 billion due to quantity changes and the proposed termination of an alternate engine program. According to the program, the cost estimate is still mostly based on cost estimating relationships (like cost per pound), not actual costs, and, therefore, is subject to change as the program captures the actual costs to manufacture the aircraft. Also, the official program estimate is based on the program’s December 31, 2005, Selected Acquisition Report delivered to Congress in April 2006. We could not review the most recent estimated costs of the JSF program. This information is being used by the Office of the Secretary of Defense in preparing its fiscal year 2008 budget request as well as for the program’s Selected Acquisition Report dated December 31, 2006, expected to be delivered to the Congress in early April 2007. Although the most recent estimates were not available for this review, we expect that, unless program content is changed, future cost estimates will be higher based on the history of similar acquisition programs and the risks that remain in the program. Table 2 shows the changes to the program’s costs since the rebaseline in fiscal year 2004. Since our last report, the program estimated a $19.8 billion net increase in its total program costs. The majority of the cost growth, over 95 percent, was for procurement. According to the program office, several factors led to an increase in the procurement cost estimate. 
The most significant increases include the following:

- $10.3 billion: design and manufacturing changes to large bulkheads in the wing section of the aircraft, which required 6 times more aluminum and almost 4 times more titanium than originally estimated. At the same time, titanium costs almost doubled.
- $3.5 billion: reduced manufacturing efficiency because of plans to build a certain number of wings at a new subcontractor.
- $5.5 billion: a change in the business relationship of the prime and two major subcontractors.
- $4.4 billion: projected higher support costs.
- $14.7 billion: changed assumptions for estimating labor rates and inflation.

The increases in procurement costs were offset by two main factors. First, the cost estimate reflects production efficiency benefits of $9.2 billion from producing 508 international partner aircraft that were not included in previous estimates. Second, the program reduced procurement costs by $5.1 billion as a result of the proposed elimination of the alternate engine program. According to the program office, it expected savings from manufacturing efficiencies by having one engine contractor producing a larger quantity of engines. Program officials stated that they have had difficulty quantifying cost savings that might accrue from competing engine buys between contractors. For now, Congress has reinstated the alternate engine program and has required further analysis from DOD and others on the costs of the program. The program also reported that development costs decreased by $1.2 billion. The reduction in development costs was due almost entirely to the removal of the remaining estimated costs to complete the alternate engine’s development. Again, Congress has since reinstated funding for the alternate engine program. The net effect of the JSF program cost increases is that DOD will pay more per aircraft than expected when the program was rebaselined. 
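The arithmetic behind these cost figures can be checked directly. The sketch below uses only the dollar amounts quoted in the report; the itemized sums are for illustration and need not reconcile exactly with the reported net totals, which reflect adjustments not listed here.

```python
# Back-of-envelope check of the JSF cost figures cited in the report.
# All values are in billions of dollars, taken from the report's text.

# Net growth since the fiscal year 2004 rebaseline: gross growth in some
# cost areas minus offsets from quantity changes and the proposed
# alternate-engine termination.
gross_growth = 48.0
offsets = 16.4
net_growth = round(gross_growth - offsets, 1)
print(net_growth)  # 31.6, matching the reported net increase

# Sum of the itemized procurement increases listed above (illustrative;
# other adjustments are not itemized in the report).
increases = [10.3, 3.5, 5.5, 4.4, 14.7]
print(round(sum(increases), 1))

# Sum of the two listed offsets to procurement costs.
print(round(9.2 + 5.1, 1))
```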
The average procurement unit cost has increased from $82 million to almost $95 million, and the program acquisition unit cost has increased from $100 million to over $112 million. Since the JSF program was rebaselined, it has experienced delays in several key development activities but without corresponding changes to the scheduled end of development. Holding firm to these dates forces the program to find ways to complete development activities in less time, especially if problems are discovered in the remaining 6 years of development. The program office is evaluating different ways to reduce the risk of this compression by being more efficient in its flight test activities. The first JSF flight was scheduled for August 2006 but did not occur until mid-December 2006, about 4 months later than expected. According to the program office, the first flight was successful but was shortened because of a problem with instrumentation on the aircraft. Although the first aircraft will be able to demonstrate some performance (limited flying qualities, propulsion, and vehicle subsystems), it is not a production representative aircraft with fully functioning critical mission systems or the design changes from the rebaselined program that reduced airframe weight. The first flight of a production representative aircraft has been delayed 8 months to May 2008. This aircraft will be a short take-off and vertical landing variant and will incorporate the design changes from the rebaselined program. According to the latest program information, the first fully integrated, capable JSF is scheduled to begin testing in the early 2012 time frame, a delay of several months. The first flight of a JSF with limited mission capability has been delayed 9 months. The estimate for first flight of a production representative conventional take-off variant has been delayed 11 months to January 2009, and the first flight of a carrier variant has been delayed by as much as 4 months to May 2009. 
The flying test bed, also critical to reducing risk in the flight test program, has been delayed about 14 months to late 2007. This aircraft is a modified Boeing 737 that will be equipped with the sensors and mission system software and hardware. The test bed will allow the program to test aircraft mission systems such as target tracking and detection, electronic warfare, and communications. Figure 2 shows schedule delays and the compression in the development schedule. The program has completed manufacturing of its first development aircraft, and manufacturing data indicate that the program did not meet its planned labor hour goals. Manufacturing data on subsequent development aircraft that have begun manufacturing indicate these aircraft are not currently meeting their planned manufacturing efficiencies either. According to contractor data as of November 2006, the first development aircraft had required 35 percent, or 65,113, more labor hours than expected. The program encountered most of the inefficiencies in the mate and delivery phase and with the fabrication of the center fuselage and wing. Figure 3 shows the planned hours versus the actual hours needed for completing the first test aircraft. When the first aircraft began manufacturing, the program had released about 20 percent of the engineering drawings needed for building the aircraft. This led to a backlog of drawings, negatively affecting the availability of parts needed for efficient manufacturing operations. To compensate for delays and parts shortages for production, components of the aircraft were manufactured out of sequence and at different manufacturing workstations than planned. For example, the wing section was mated to the center fuselage before work on the wing was completed. The wing was only 46 percent complete and still required more than 18,500 hours of work. 
Because this remaining work was completed at a different workstation than was planned, contractor officials stated that major tooling, such as a stand that supports the wing structure upright to allow workers to install wiring and other parts, was not available for use. As a result, workers were required to lie on the ground or bend under or over the wing structure to complete the wing assembly, significantly increasing the number of hours needed to complete this effort. According to the Defense Contract Management Agency, out-of-station work performed on the wing required 46 percent more hours than planned. Late delivery of parts and late qualification of subsystems were the major drivers of the mate and delivery inefficiencies, more than doubling the hours needed to complete this activity. Lockheed Martin, the prime contractor, appears to be focused on developing an efficient and effective manufacturing process for the JSF, but it is still very early in that process. The development aircraft now in manufacturing are not currently meeting their planned efficiencies. As with the first test aircraft, the program does not expect to manufacture the development aircraft in the planned manufacturing sequence. The program expects to move some wing fabrication activities to final assembly and do both fabrication and final assembly concurrently. Early development aircraft are already experiencing inefficiencies and delays. As of December 2006, wing manufacturing data for one of these aircraft shows the program had completed less than 50 percent of the activities expected at this time while requiring 41 percent more hours than planned. According to the contractor and program officials, these inefficiencies are largely due to late delivery of the wing bulkheads because of a change in their manufacturing process. 
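The reported labor-hour figures imply a baseline that can be back-calculated. The sketch below assumes the reported 35 percent overrun on the first development aircraft is measured against planned labor hours (a plausible but unstated baseline); the derived totals are illustrative, not figures reported by the program.

```python
# Implied labor-hour baseline for the first development aircraft,
# assuming the 65,113-hour overrun equals 35 percent of the planned hours.
overrun_hours = 65_113
overrun_pct = 0.35
planned_hours = overrun_hours / overrun_pct      # derived, not from the report
actual_hours = planned_hours + overrun_hours     # derived
print(round(planned_hours))
print(round(actual_hours))

# Out-of-station wing work: the report cites 46 percent more hours than
# planned for that work and more than 18,500 hours remaining at mate.
# If the 46 percent applied to those remaining hours, the extra effort
# alone would be roughly:
print(round(18_500 * 0.46))  # illustrative only
```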
The Defense Contract Management Agency has rated manufacturing as high risk, stating that the primary cause of risk is the late delivery of parts to properly support the manufacturing work flow. It projects further delays to schedule, increased costs, and subsequent out-of-sequence work. An early indicator of design stability is the completion of design drawings at the critical design review. In February 2006, the program held its critical design review for production representative conventional and short take-off and vertical landing aircraft. At that time, the program had completed 47 percent of the short take-off aircraft design and 3 percent of the conventional aircraft design. Our previous best practices work suggests that completion of 90 percent of a product’s engineering drawings provides tangible evidence that the design is stable. As with the first aircraft, the program has experienced late releases of engineering drawings, which has delayed the delivery of critical parts from suppliers to manufacturing for the building of the initial aircraft. For example, based on program data as of October 2006, more than one-third of the drawings needed to complete these two variants are expected to be released late to manufacturing. Although the first aircraft encountered manufacturing inefficiencies, the JSF Program and the contractor have pointed to some successes in this initial manufacturing effort. For example, they have stated the mate of the major sections of the aircraft was more efficient than in past aircraft programs because of the state-of-the-art tools used to design the aircraft and develop the manufacturing process. Likewise, they have indicated that they have experienced fewer defects in this first aircraft than experienced on legacy aircraft. We would agree that the contractor has made progress in demonstrating the use of several large tools and fabrication processes in building the first test aircraft. 
However, a key factor in developing an efficient and effective manufacturing process is a mature aircraft design. Major design modifications can cause substantial and costly changes in the manufacturing process. For example, since the first aircraft entered production, the manufacturing process has had to be altered because of redesign required to resolve weight and performance problems. According to Defense Contract Management Agency officials, some tools already bought and in place were either no longer useful or being used less efficiently. New tools had to be procured and the manufacturing process had to change. The Defense Contract Management Agency noted that these additional tooling costs were about $156 million. Contractor officials stated that the current manufacturing capacity is sufficient to produce about 24 aircraft per year. Given that only one aircraft has been built and essentially all of the flight and static and durability testing remains to be done, there is still significant risk that the JSF design for each of the three variants will incur more changes as more design knowledge is gained. Currently, the JSF program estimates that by the time the development program ends, the aircraft design will meet all but one of its key performance parameters. The performance estimates to date are based on engineering analyses, computer models, and laboratory tests. Key performance parameters are defined as the minimum attributes or characteristics considered most essential for an effective military capability; for the JSF, there are eight parameters. The program office estimates that seven of the eight key performance parameters are being met. The aircraft is currently not meeting its full interoperability performance parameter due to a requirement for beyond-line-of-sight communications. Meeting the full interoperability requirement is currently dependent on other capabilities being developed outside the JSF program. 
Most ground and flight tests will have to be completed before all the key performance estimates are confirmed. At this time, the program has completed less than 1 percent of the flight test program and no structural or durability tests have been started. According to the program’s test and evaluation master plan, the key performance parameters will be verified during testing from 2010 to 2013. Table 3 shows the program’s estimate for each key performance parameter. The JSF program’s acquisition strategy includes significant challenges to achieve projected cost and schedule goals. The program has begun procurement but not yet demonstrated that the aircraft design is mature, can be manufactured efficiently, and delivered on time. The flight test program has just begun, and there is always risk of problems surfacing and causing further delays. The degree of concurrency between development and production in the JSF program’s acquisition strategy still includes significant risks for cost and schedule overruns or late delivery of promised capabilities to the warfighter. The program also faces uncertainties with the amount of funding that will be available to support the program’s plan. Other DOD review and oversight organizations have also expressed concern over the level of risk in the program and the resulting costs that will be incurred to complete this acquisition program. The program has planned a 7-year flight test program that includes over 11,000 hours of testing and over 6,000 flights. This is 75 percent more than the F-22A’s flight test program and more than double the F/A-18E/F testing efforts. As of this report, the flight test program was only beginning with essentially all critical flight testing remaining to confirm that the aircraft will indeed deliver the required performance. Figure 4 shows the planned flight tests by major test categories. 
The JSF variants possess significant similarities (all are designed to have low observable airframe characteristics, fly at supersonic speeds, shoot air-to-air missiles, and drop bombs on target), but each variant has unique performance goals to support the services’ different operational concepts and environments. Test officials acknowledge that each variant will require separate flight testing to demonstrate that it will fly as intended. About two-thirds of the flight tests are planned for demonstrating the performance of each aircraft design. The other one-third of the flight tests are expected to confirm shipboard operations, mission systems, survivability, and armament. Manufacturing and technical problems can delay the completion of a flight test program, increase the number of flight test hours needed to verify that the system will work as intended, and affect scheduled delivery to the warfighter. Under the current testing schedule, the JSF program plans to manufacture and deliver 15 flight test aircraft and 7 ground test articles in 5 years, an aggressive schedule when compared with other programs with fewer variables. For example, the F-22A program took almost 8 years to manufacture and deliver nine flight test aircraft and two ground test articles of a single aircraft design. When the B-2 program began flight testing in July 1989, it estimated that the flight test program would last approximately 4.5 years and require about 3,600 flight test hours. When the test program ended in 1997, the flight test hours had grown to 5,000 hours, or by 40 percent, over an 8-year period. Program officials cited several causes, including difficulties in manufacturing test aircraft and correcting deficiencies from problems discovered during testing. The F-22A encountered similar delays, increasing a planned 4-year flight test program to about 8 years and affecting the program’s ability to conduct operational testing and move into production on schedule. 
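The historical comparisons above can be sanity-checked with simple arithmetic. In the sketch below, the implied F-22A flight test hours are a derived figure under the assumption that the "75 percent more" comparison refers to test hours; they are not a number reported by the program.

```python
# Rough check of the flight-test comparisons cited in the report.

# JSF plans over 11,000 flight test hours, described as 75 percent more
# than the F-22A's test program. Under that reading, the implied F-22A
# baseline is roughly:
jsf_hours = 11_000
implied_f22_hours = jsf_hours / 1.75   # derived, not reported
print(round(implied_f22_hours))

# B-2 flight-test growth cited in the report: about 3,600 planned hours
# grew to 5,000 actual hours.
b2_planned, b2_actual = 3_600, 5_000
growth_pct = (b2_actual - b2_planned) / b2_planned * 100
print(round(growth_pct))  # close to the reported "by 40 percent"
```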
As discussed earlier, current JSF schedules are already showing that delivery of early test aircraft will be later than the planned delivery date. The flight test program will also hinge on delivery of aircraft with the expected capabilities. JSF’s expected capabilities are largely dependent on software that supports vehicle and mission systems. The program plans to develop over 22 million lines of code, more than 6 times the lines of code needed for the F-22A, in five blocks. The first block is nearly complete and the last block is scheduled for completion in late 2011. The program has completed less than 40 percent of the software needed for the system’s full functionality. Most of the completed software is designed to operate the aircraft’s flying capabilities, while much of the remaining software development includes software needed for mission capability, including weapons integration and the fusion of information from onboard sensors and sources off the aircraft. Past programs have encountered difficulties in developing software, which delayed flight test schedules. JSF program officials acknowledged that the software effort will become particularly challenging during 2007 and 2008 when all five software blocks will be in development at the same time. The concurrency between development and production in DOD’s acquisition strategy for JSF did not substantially change as a result of the program’s rebaseline in fiscal year 2004. Therefore, the program is entering low-rate initial production without demonstrating through flight testing that (1) the aircraft’s flying qualities function within the parameters of the flight envelope (that is, the set limits for altitude, speed, and angles of attack); (2) the aircraft design is reliable; or (3) a fully integrated and capable aircraft system can perform as intended. 
Starting production before ensuring design maturity through flight testing significantly increases the risk of costly design changes that can push the program over budget and behind schedule. Failure to capture key design knowledge before producing aircraft in quantity can lead to problems that eventually cascade and become magnified through the product development and production phases. Figure 5 is a notional illustration comparing the impacts of a highly concurrent acquisition strategy with those of a less concurrent strategy that captures key design and manufacturing data before production begins. While some concurrency may be beneficial to efficiently transition from the development stage of a program to production, the JSF is currently planned to be significantly more concurrent than the F-22A program, which failed to deliver the warfighting capability on time and at predicted costs. Table 4 provides a more detailed comparison between the JSF and F-22A development programs and the accomplishments and requirements before starting production in each program. As a result of the risk associated with highly concurrent development and production, the JSF program plans to place initial production orders on cost reimbursement contracts. Cost reimbursement contracts provide for payment of allowable incurred costs, to the extent prescribed in the contract. Such contracts are used when costs cannot be estimated with sufficient accuracy to use any type of fixed price contract. Cost reimbursement contracts place a substantial risk on the buyer—in this case DOD—because the contractor’s responsibility for the cost risks of performance has been minimized or reduced. As knowledge was gained over time, the program office intended to shift the contract type to one that places more cost risk on the contractor. 
However, DOD materials supporting the President’s fiscal year 2008 budget show that all low-rate production orders will be placed on cost reimbursement contracts. To execute its current plan, the JSF program must obtain unprecedented levels of annual funding—an average of more than $12.6 billion in acquisition funds annually over the next 2 decades. Regardless of likely increases in program costs, the sizeable continued investment in JSF—estimated at roughly $252 billion over 20 years—must be viewed within the context of the fiscal imbalance facing the nation within the next 10 years. The JSF program will have to compete with many other large defense programs, such as the Army’s Future Combat System and the Missile Defense Agency’s ballistic missile defense system, for funding during this same time frame. There are also important competing priorities external to DOD’s budget. Fully funding specific programs or activities will undoubtedly create shortfalls in others. Funding challenges will be even greater if the program fails to achieve current cost and schedule estimates for the revised program baseline. The consequences of even a modest cost increase or schedule delay on a program this size are dramatic. For example, since the program rebaseline in fiscal year 2004, the estimated annual funding requirements have increased every year from 2012 to 2027 by at least $1 billion and in some cases by $3 billion to $7 billion. These funding increases would be enough to fund several major programs’ activities. Figure 6 shows growth in estimated annual funding requirements from December 2003 to December 2005. Due to affordability pressures, DOD is beginning to reduce procurement budgets and annual quantities. The just-released fiscal year 2008 defense budget shows declining procurement quantities for the first years of production. 
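As a rough consistency check, the funding figures in this paragraph can be multiplied out; the sketch below uses only numbers stated in the report:

```python
# Check that the average annual funding and the 20-year total line up.
avg_annual_funding = 12.6   # billions of dollars per year, on average
years = 20                  # "over the next 2 decades"

total = avg_annual_funding * years
print(f"Implied total investment: ~${total:.0f} billion")  # roughly the $252 billion cited
```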
To meet future constrained acquisition budgets, Air Force and Navy officials and planning documents suggest a decrease in maximum annual buy quantities from the 160 shown in the current program of record to about 115 per year, a 28 percent decrease. While this will reduce annual funding requirements, it will also stretch the procurement program by at least 7 years, to 2034, assuming buy quantities are deferred rather than eliminated. DOD’s military service operational test organizations, the Cost Analysis and Improvement Group (CAIG), and the Defense Contract Management Agency (DCMA) have expressed concerns over the level of risk and estimated costs of the program. These oversight and testing organizations highlight some of the program risks and the challenges the JSF program must overcome to avoid further slips in schedule and more cost growth. A February 2006 operational assessment of the JSF program by Air Force, Navy, and United Kingdom operational test officials noted several areas of risk. According to the test report, several of these issues, if not adequately addressed, are likely to have a substantial or severe operational impact on the JSF’s mission capabilities. Key concerns raised in the report include the following: (1) software development and testing schedules are success-oriented and have little margin to accommodate delays; (2) the developmental flight test schedule provides little capability to respond to unforeseen problems and still meet the scheduled start of operational testing, threatening to slip operational testing and initial operational capability; (3) predicted maintenance times for propulsion system support, integrated combat turn, and gun removal and installation do not meet requirements; (4) design requirements to preserve volume, power, and cooling for future growth are in jeopardy and will limit the capability to meet future requirements; and (5) certain technical challenges in the aircraft or its subsystem design could affect operational capability. 
In a follow-up discussion on the report, test officials stated that these concerns were still current and they had not been informed by the program office of planned actions to address them. The December 2006 Annual Report of DOD’s Director, Operational Test and Evaluation recommended that the JSF program follow up on these issues. The CAIG has expressed concerns about the realism of estimated program costs. Its preliminary cost estimate in 2005 was substantially higher than the program office estimate. The CAIG cited costs associated with mission systems, system test, engines, and commonality as drivers in the difference between its estimate and that of the program office. According to discussions in 2006 with CAIG officials, they still have concerns and continue to expect program costs to be much higher than the program office’s current estimate. The CAIG is not required to submit its next formal independent cost estimate until the preparations for Milestone C, which for the JSF program is full-rate production. For major defense acquisition programs, this milestone generally should occur before low-rate initial production. Milestone C is scheduled for late 2013. DCMA’s concerns focus on the prime contractor’s ability to achieve its cost and schedule estimates. DCMA, responsible for monitoring the prime contractor’s development and procurement activities, found that delays in aircraft deliveries and critical technical review milestones put at risk the contractor’s ability to meet the current schedule. DCMA also identified manufacturing operations as a high-risk area, highlighting issues with parts delivery, raw material availability, and subcontractor performance. Finally, it raised concerns with contractor cost growth, stating that the contractor has shown continuing and steady increases since development started, even after the contract’s target price was increased by $6 billion as part of the program’s rebaseline. 
As of November 2006, DCMA projected that the contractor’s current estimated development costs would increase by about $1 billion. The JSF is entering its sixth year of a 12-year development program and is also entering production. The development team has achieved first flight and has overcome major design problems found earlier in development. In addition, the department counts on this aircraft to bear the brunt of its recapitalization plans. Therefore, we believe the program is critical to the department’s future plans and is viable, given progress made to date. However, the current acquisition strategy still reflects very significant risk that both development and procurement costs will increase and aircraft will take longer to deliver to the warfighter than currently planned. Even as the JSF program enters the midpoint of its development, it continues to encounter significant cost overruns and schedule delays because the program has continued to move forward into procurement before it has knowledge that the aircraft’s design and manufacturing processes are stable. Although some of the additional costs were predictable, other costs, especially those resulting from rework, represent waste the department can ill afford. Flight testing began just a few months before the decision to begin low-rate initial production. The challenges and risks facing the program are only expected to increase as the program begins to ramp up its production capabilities while completing design integration, software design, and testing. DOD’s approval to enter low-rate initial production this year committed the program to this high-risk strategy. If the program is unable to mitigate risks, its only options will be to reduce program requirements or delay when the program achieves initial operational capability. 
We see two ways this risk can be reduced: (1) reducing the number of aircraft procured before testing demonstrates their performance capabilities, thereby reducing the potential for costly changes to the aircraft and manufacturing processes, or (2) reexamining the required capabilities for initial variants with an eye toward bringing them up to higher capability in the future. Last year Congress reduced funding for the first two low-rate production lots of aircraft, thereby slowing the ramp-up of production. This was a positive first step in lowering risk during the early years of testing. However, a significant amount of ground and flight tests remains over the next 6 years. All three variants need to demonstrate their flight performance. The carrier variant will be the last of the three variants to be delivered to the flight test program. It is now scheduled to start flight testing in May 2009 and has nearly 900 flight tests planned to demonstrate its flight performance. If the program executes its plan for a steep ramp-up in production before proving the basic flying qualities of each aircraft variant, the likelihood of costly changes to its significant investment in production will remain high. To improve chances of a successful outcome, we are recommending that the Secretary of Defense limit annual low-rate initial production quantities to no more than 24 aircraft per year, the current manufacturing capacity, until each variant’s basic flying qualities have been demonstrated in flight testing now scheduled in the 2010 time frame. DOD provided us with written comments on a draft of this report. The comments appear in appendix II. DOD non-concurred with our recommendation, stating that the current JSF acquisition strategy provides an effective balance of technical risk, financial constraints, and operational needs of the services. 
However, we believe DOD’s actions to reduce aircraft quantities in the fiscal year 2008 President’s Budget are in line with our recommendation to limit production to current manufacturing capacity until each variant’s flying qualities have been demonstrated in flight testing. In the 2008 budget, DOD reduced the number of production aircraft it plans to buy during the flight test program by about 35 percent as compared to its previous plan for the JSF. Under this new plan DOD does not substantially increase its buy quantities of production aircraft until 2011. We continue to believe that limiting production quantities until the design is demonstrated would reduce the overlap in production and development while still allowing the efficient transition from development to production. It would also make cost and schedule more predictable and lessen the risk to DOD’s production investment. The JSF program is still only in its sixth year of a 12-year development program with significant challenges remaining such as completing the design, software development, and flight testing. As such, there is continued risk that testing will not go as planned and demonstrating the aircraft’s capability could be delayed beyond the current plan. Therefore, we maintain our recommendation and will continue to monitor the progress in the test program and the resulting dynamics between development and production. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director of the Office of Management and Budget. We will also provide copies to others on request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Other staff making key contributions to this report were Michael Hazard, Assistant Director; Lily Chin; Matthew Lea; Gary Middleton; Daniel Novillo; Karen Sloan; Brian Smith; Adam Vodraska; and Joe Zamoyta. To determine the status of the Joint Strike Fighter (JSF) program’s cost, schedule, and performance, we compared current program estimates against estimates established after the program was rebaselined in fiscal year 2004. Current official program cost estimates are based on the program’s December 31, 2005, Selected Acquisition Report to Congress. At the time of our review, the Office of the Secretary of Defense was still preparing its new cost estimate to be included in the program’s Selected Acquisition Report dated December 31, 2006, expected to be delivered to the Congress in April 2007. Because the new official cost estimate for the JSF program will not be available until after this report is issued, we are unable to make informed judgments on those estimated costs. It should be noted that after our 2006 report was issued on March 15, 2006, DOD released its December 2005 Selected Acquisition Report, which showed an increase of over $19 billion in total estimated JSF program costs. We identified changes in the program’s cost, schedule, and performance since the program rebaseline and analyzed relevant information to determine the primary causes of those changes. We reviewed JSF management reports, acquisition plans, test plans, risk assessments, cost reports, independent program assessments, and program status briefings. We interviewed officials from the DOD acquisition program management office and prime contractor to gain their perspectives on the performance of the program. To identify the challenges the program will face in the future, we compared the program’s plans and results to date with future plans to complete development. We analyzed design and manufacturing data from the program office and the prime contractor to evaluate performance and trends. 
We reviewed program risk reports, earned value management data, and manufacturing data to identify uncertainties and risks to completing the program within the new targets established by the program rebaseline. We analyzed test program and software data to understand the readiness and availability of development aircraft for the test program. We also obtained information on past DOD programs from Selected Acquisition Reports and prior work conducted by GAO over the past two decades. We interviewed officials and reviewed reports from several DOD independent oversight organizations to gain their perspectives on risk in the program. To assess the likely impacts of concurrently developing and manufacturing JSF aircraft, we compared the program’s plans and results to date against best practice standards for applying knowledge to support major program investment decisions. The best practice standards are based on a GAO body of work that encompasses 10 years and visits to over 25 major commercial companies. Our work has shown that valuable lessons can be learned from the commercial sector and can be applied to the development of weapons systems. We identified gaps in product knowledge at the production decision, reasons for these gaps, and the risks to the program. We also examined the F-22A program’s acquisition approach. We interviewed officials from the DOD acquisition program management office and prime contractor to gain their perspectives on program risks and their approaches to managing risks. 
In performing our work, we obtained information and interviewed officials from the JSF Joint Program Office, Arlington, Virginia; F-22A Program Office, Wright-Patterson Air Force Base, Ohio; Lockheed Martin Aeronautics, Fort Worth, Texas; Defense Contract Management Agency, Fort Worth, Texas; and offices of the Director, Operational Test and Evaluation, and Acquisition, Technology and Logistics, Program Analysis and Evaluation-Cost Analysis Improvement Group, which are part of the Office of the Secretary of Defense in Washington, D.C. We performed our work from June 2006 to March 2007 in accordance with generally accepted government auditing standards.
The Joint Strike Fighter (JSF) program—a multinational acquisition program for the Air Force, Navy, Marine Corps, and eight cooperative international partners—is the Department of Defense's (DOD) most expensive aircraft acquisition program. DOD currently estimates it will spend $623 billion to develop, procure, and operate and support the JSF fleet. The JSF aircraft, which includes a variant design for each of the services, represents 90 percent of the remaining planned investment for DOD's major tactical aircraft programs. In fiscal year 2004, the JSF program was rebaselined to address technical challenges, cost increases, and schedule overruns. This report—the third mandated by Congress—describes the program's progress in meeting cost, schedule, and performance goals since rebaselining and identifies various challenges the program will likely face in meeting these goals in the future. The JSF program has delivered and flown the first development aircraft. However, cost and schedule goals established in the fiscal year 2004 rebaselined program have not been met. Total JSF program acquisition costs (through 2027) have increased by $31.6 billion, and DOD will now pay 12 percent more per aircraft than expected in 2004. The program has also experienced delays in several key events, including the start of the flight test program, delivery of the first production representative development aircraft, and testing of critical mission systems. Delays in the delivery of initial development aircraft were driven by incomplete engineering drawings, changes in design, manufacturing inefficiencies, and parts shortages. Despite these delays, the program still plans to complete development in 2013, compressing the amount of time available for flight testing and development activities. 
Also, the program projects it will meet all but one key performance requirement—line-of-sight communications—which is currently dependent on other capabilities being developed outside the JSF program. Accurately predicting JSF costs and schedule and ensuring sufficient funding will likely be key challenges facing the program in the future. JSF continues to pursue a risky acquisition strategy that concurrently develops and produces aircraft. While some concurrency may be beneficial to efficiently transition from development to production, the degree of overlap is significant on this program. Any changes in design and manufacturing that require modifications to delivered aircraft or to tooling and manufacturing processes would result in increased costs and delays in getting capabilities to the warfighter. Low-rate initial production will begin this year with almost the entire 7-year flight test program remaining to confirm the aircraft design. Confidence that investment decisions will deliver expected capability within cost and schedule goals increases as testing proves the JSF will work as expected. The JSF program also faces funding uncertainties as it will demand unprecedented funding over the next 2 decades—more than $12.6 billion a year on average through 2027.
To determine the soundness of IRS’s PDC study as primary support for IRS’s decision to discontinue contracting out tax debt collection, we reviewed the study report and supporting documents and other data. We interviewed IRS officials and contractors involved in the study. We also reviewed the report IRS commissioned to validate the PDC study conclusion and interviewed contractor staff involved in that effort. We also interviewed officials from one of the PCA firms IRS contracted with to obtain their views on the PDC study. We reviewed and summarized program analysis guidance from various sources in developing our criteria. Such guidance came from Office of Management and Budget (OMB) Circular A-94, a previous GAO publication on evaluating federal programs, an academic research paper on costs and benefits that should be considered in making decisions on resources for tax enforcement programs, and accepted quantitative analysis criteria on sampling cases and projecting results. We compared IRS’s study methodology and report to the criteria from the various guidance. We also reviewed IRS’s Internal Revenue Manual (IRM) and interviewed officials to determine if IRS had guidance on whether and how to conduct and document economic analyses to support decisions to initiate, renew, or expand programs. We also reviewed the types of costs and benefits that IRS included in the PDC study. To determine what changes IRS has planned or made to its collection approach based on its PCA experience and the PDC study, we reviewed program documents and interviewed IRS officials on their processes and procedures for collecting tax debt. We also compared IRS’s plans for studying whether to work on PCA-type cases in the future to guidance in our previous publication on evaluating federal programs and in our report reviewing IRS’s study of the earned income tax credit. 
We conducted this performance audit from November 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In developing the PDC program, IRS officials said that contracting with PCAs was needed because Congress was unlikely to provide IRS sufficient staff to attempt collection on the inventory of cases with a lower priority. Also, officials said that PCAs would pursue cases that IRS staff would not because IRS had other higher-priority collection cases. As IRS planned the PDC program, we issued two reports with findings and recommendations to improve the program. In May 2004, we reported that IRS’s PDC study approach—comparing PCA and IRS performance for the same type of simpler cases that would be sent to PCAs—would provide limited information to judge whether using PCAs is the best use of resources. In sum, the approach conflicted with IRS officials’ position that these simpler cases would not be worked by IRS employees given the higher-priority cases in IRS’s workload. We recommended instead that IRS compare the use of PCAs to a collection strategy that officials determine to be the most effective and efficient overall way of achieving collection goals. In September 2006, we reported that IRS’s planned PDC study approach would not meet the intent of our 2004 recommendation. Because the study would not count the fees paid to PCAs as program costs, it would not compare the results of using PCAs with the results IRS could get if it was given the same amount of resources, including the fees to be paid to the PCAs. 
We recommended instead that IRS ensure that the study methodology and the reports on the study include the full costs of the PDC program, including the fees paid to PCAs and the best use of those federal funds. IRS agreed with the recommendations in both reports, and IRS’s actions to implement them generally are the topic of this report. IRS’s PDC study did not meet the intent of our 2004 recommendation because it did not compare using PCAs to what IRS officials determined to be the most efficient and effective overall strategy. Although IRS met part of our 2006 recommendation in that the PDC study included fees paid to PCAs as program costs, the PDC study did not fully implement our recommendation to include the best use of federal funds because, as discussed further below, the study had methodological errors and a narrow scope. IRS’s PDC study report results were released on March 5, 2009, along with IRS’s announcement that it was ending the program. The study tracked selected cases assigned to IRS’s Automated Collection System (ACS) versus PCAs and measured them in terms of cost per dollar collected, percentage of balance due collected, and percentage of cases in payment status at the end of the study period. For both PCAs and IRS, the PDC study counted the dollars collected from full payments or estimated dollars collected through installment agreements. The study results, summarized in table 1, showed that IRS performed better in each of the measures. On February 3, 2009, legislation was introduced in the House to amend the Internal Revenue Code of 1986 to repeal the authority of the Secretary of the Treasury to enter into PDC contracts. The last major action on the legislation was on February 3, 2009, when it was referred to the House Committee on Ways and Means. 
The Omnibus Appropriations Act of 2009, enacted March 11, 2009, prevented IRS from using fiscal year 2009 appropriated funds “to enter into, renew, extend, administer, implement, enforce, or provide oversight of any qualified tax collection contract (as defined in section 6306 of the Internal Revenue Code of 1986).” Absent the access to appropriated funds, IRS funded the administrative costs of the PDC program through its user fees until IRS was able to end all PDC activities. The Consolidated Appropriations Act of 2010 placed the same prohibition on IRS’s use of fiscal year 2010 funds. According to IRS officials, the PDC study was not originally intended or designed to be primary support for IRS’s PDC program decision. Even though other factors, such as potential increases in IRS collection staffing, were considered, based on our interviews with IRS officials and IRS’s announcement of the program’s termination, the study results played a primary role in supporting the decision. IRS officials noted that the difference in cost per dollar collected using IRS staff and PCAs was so pronounced that in their view, additional analyses would have been unlikely to change the decision that was made. Nevertheless, neither we nor IRS officials know whether the PDC study results and decision on the program would have differed significantly if the study had been designed to be primary support for IRS’s PDC program. However, errors in the study sampling methodology and the study’s narrow scope limit its usefulness in supporting the PDC decision. IRS does not have guidance on whether and how to conduct and document economic analyses to support decisions to initiate, renew, or expand programs. The IRM, which includes authoritative guidance for IRS managers, does not include guidance on economic analyses that should support decisions to initiate, renew, or expand a program. 
Because IRS did not perform certain analyses and documentation is not available to do those analyses, it is unclear whether the study’s results are accurate. Guidance for statistical surveys requires that a study’s sampling procedures be sufficiently described such that policymakers can assess whether the study’s results can be generalized to the settings and times of interest. The PDC study documentation mentioned a design that could represent the full population of PCA-type cases based on sample cases drawn from various groupings of PCA-type cases. However, IRS did not generalize the study results back to the full PCA case population, limiting its analysis to the sample cases. Results based only on sampled cases commonly differ significantly from results generalized back to the full population. IRS did not retain sufficient documentation on the sample selection and analysis to enable it or others at this point to calculate estimates for the population, measure the margin of error, or otherwise ensure the soundness of the sample approach. Even though IRS documentation referred to a sampling approach, IRS officials said the study was not intended to provide generalizable results. The sample cases assigned to the PCAs and IRS for comparative purposes may or may not have been similar. IRS was concerned about potential differences between the groups and tested the difference in the average balance due amount between the cases to be assigned to PCAs and IRS. The test did not find a difference. However, this was a limited test of only one variable. IRS could have tested other variables, such as taxpayer filing history, adjusted gross income, filing status, geographic region, and type of forms completed by the taxpayer. Differences in any of these variables could affect taxpayer compliance and payment. Performance differences between the PCAs and IRS could be due to differences in the composition of the groups. 
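The margin-of-error calculation discussed above is standard for a simple random sample; the sketch below illustrates it with hypothetical figures (the sample size, mean, and standard deviation are invented for illustration and are not IRS data):

```python
import math

# Margin of error for a sample mean at 95 percent confidence,
# assuming a simple random sample. All figures are hypothetical.
sample_size = 400
sample_mean = 1250.0   # e.g., mean dollars collected per sampled case
sample_std = 900.0     # sample standard deviation

z = 1.96  # critical value for 95 percent confidence
standard_error = sample_std / math.sqrt(sample_size)
margin_of_error = z * standard_error

print(f"95% margin of error: +/-${margin_of_error:.2f}")            # +/-$88.20
print(f"95% CI for the population mean: ${sample_mean - margin_of_error:.2f} "
      f"to ${sample_mean + margin_of_error:.2f}")                   # $1161.80 to $1338.20
```

A stratified design or a finite-population correction would change the formula; the point is simply that this calculation requires the retained sample documentation the report says is unavailable.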
Because the PDC study had a narrow objective of comparing the results of collection efforts by IRS and PCAs for the PCA-type cases, its design did not consider other factors included in federal and other guidance on conducting program analyses. An objective more directly focused on providing information to serve as primary support for IRS’s decision on the PDC program could have led IRS to conduct additional analyses relevant to the decision. The findings from those analyses could have affected the program decision. Guidance on conducting program analysis to determine whether to continue a program states that the rationale for the government program being examined should be clearly stated in an analysis. Clearly stating the rationale for the program can help ensure that the analysis includes relevant factors and outcome measures, particularly if facts related to the rationale have changed and therefore need to be included in the analysis. Stating the rationale of the program and conducting related analyses can help ensure that variables measured in the study answer the questions decision makers have about the basic reasons the program exists, the continuing need for it, and the costs and benefits of continuing, revising, or ending it. For instance, one rationale for starting the program was that Congress was unlikely to provide funds for staff to work these cases. The PDC study did not cover whether this had changed and, if so, whether IRS would use any portion of additional staffing for PCA-type cases. As discussed later, the number of staff who would normally work PCA-type cases has increased in recent years. To inform a decision on whether to continue a program and ensure the best use of federal resources, government officials need information on the range of alternatives for achieving a program’s objectives at the lowest costs. 
Guidance states that program analyses should include alternative means of achieving program objectives by examining different program scales, methods of provision, and degrees of government involvement. The PDC study included three alternatives: (1) PCAs attempting to collect unpaid tax debts for the inventory assigned to them, (2) IRS taking collection action on the types of cases that were assigned to PCAs, and (3) IRS taking collection action on another type of inventory normally not worked. However, beyond not addressing whether to continue the PDC program, the study did not analyze alternatives for program scale, such as expanding the PDC program or scaling it back to a segment of cases that might be more cost effective for PCAs to work than IRS. For example, in deciding whether to scale back rather than eliminate the PDC program, IRS could have analyzed the types of cases, if any, where PDC performance was better than IRS performance. Assuming that this analysis pointed to favorable types of cases for PCAs, IRS then could have limited the program to working those cases, unless the benefit of doing this was so limited that running a separate PDC program with fixed costs would not be cost beneficial. According to IRS officials, IRS had effectively scaled back the program during its course because after the initial wave of cases assigned to PCAs, IRS had difficulties in obtaining a sufficient volume of cases appropriate for PCAs to work. In addition, IRS did not compare the PDC program to what it judged to be the best overall strategy for improving tax collections, as we had recommended IRS do in our 2004 report, even though the Commissioner of Internal Revenue agreed with our recommendation. 
In 2004, we concluded that IRS should do a study in line with federal guidance, such as comparing the results of using PCAs to the results from using the same amount of funds to be paid to PCAs in an unconstrained manner that IRS determined to be the most effective overall way of achieving its collection goals. After our report, Congress authorized the PDC program and included the provision making 25 percent of PDC collections available to IRS. Conceptually, Congress effectively made available funds for the PDC program totaling the amounts paid to PCAs plus the up to 25 percent of PDC collections that IRS could use for enforcement purposes. We continue to believe that a comparison of results of the PDC program—as authorized by Congress—to results IRS would achieve if given the same funds to use in what it judged to be the best possible manner would have better supported a decision on the PDC program. To inform a decision on whether to continue a program, government officials need complete and reliable information on all the program’s benefits and costs. Guidance for doing economic analyses states that to the extent possible all benefits and costs should be monetized to provide a standard unit of comparison. If it is not feasible to assign monetary values, other quantification of costs and benefits should be done. If quantification is not possible, at a minimum, analyses should include a comprehensive listing of the different types of benefits and costs to identify the full range of program effects. Furthermore, analyses should be explicit about the underlying assumptions used to estimate future benefits and costs. The PDC study measured the government’s cost per dollar of direct tax revenue collected, percentage of balance due collected, and percentage of cases in payment status at the end of the study period. 
As cited in IRS’s announcement of the decision to not renew PCAs’ contracts, the study showed that when working the same types of cases as PCAs, IRS had better results than PCAs. Specifically, IRS’s cost to collect a dollar itself was $0.07, while the government’s cost using PCAs was $0.24 for each dollar collected. According to IRS officials, IRS’s results on the cases it worked were comparable to or better than the results of working its normal cases. Although the study measured direct revenues, it did not estimate or otherwise discuss indirect revenue (for example, whether other taxpayers are more or less likely to pay their taxes when IRS works the cases than when PCAs do). Indirect revenues are difficult to estimate, but the study could have explored whether any logical reasons exist to indicate that indirect revenue would have varied based on which party worked the PCA-type cases. As discussed further below, the PDC study also did not include potentially important program costs and benefits for a tax enforcement program, such as costs to taxpayers to comply with requirements to pay their delinquent taxes (are costs to taxpayers lower under IRS or PCAs?); equity (is IRS or are PCAs more effective in collecting taxes from taxpayers in different income groups, for example?); and economic efficiency (is IRS or are PCAs more effective at working cases involving taxpayers in different industries, for example, resulting in more or less distortion of activity across types of businesses?). IRS officials said that these benefits and costs were not addressed in the study because they are difficult to measure. However, IRS could have followed OMB guidance to, at a minimum, list and discuss these omitted costs and benefits in the final report. As a result, the decision to end the program rests, in part, on the assumption that no important differences exist between IRS and PCA handling of these cases related to these costs and benefits.
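The study’s headline comparison reduces to a simple cost-effectiveness ratio: government cost divided by dollars collected. The sketch below illustrates that calculation; the report gives only the resulting ratios ($0.07 and $0.24), so the cost and collection totals here are hypothetical, chosen solely to reproduce those ratios.

```python
def cost_per_dollar_collected(total_cost, dollars_collected):
    """Cost-effectiveness ratio of the kind compared in the PDC study."""
    return total_cost / dollars_collected

# Hypothetical totals (not from the study) that reproduce the reported ratios.
irs_ratio = cost_per_dollar_collected(total_cost=70_000, dollars_collected=1_000_000)
pca_ratio = cost_per_dollar_collected(total_cost=240_000, dollars_collected=1_000_000)

print(f"IRS: ${irs_ratio:.2f} per dollar collected")  # IRS: $0.07 per dollar collected
print(f"PCA: ${pca_ratio:.2f} per dollar collected")  # PCA: $0.24 per dollar collected
```

On this measure a lower ratio is better, but the comparison says nothing about omitted costs and benefits such as taxpayer compliance costs, equity, or economic efficiency.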
If IRS assumed that no such differences existed, this should have been stated in the report with the supporting rationale. Some important differences might exist. For example, it is not clear that taxpayer costs would be the same regardless of which party worked on collecting their tax debt. Taxpayer compliance costs could be lower when IRS collects taxes because 90 percent of the cases IRS worked were eligible for systemic actions, such as systemic levies of taxpayers’ assets. Such actions might incur little cost for the taxpayer beyond reading IRS’s notification of intent to levy assets, whereas a PCA case required that a taxpayer first make a payment, answer a phone call, or return a call in response to a PCA’s letter or call. Policymakers do not know whether costs to taxpayers differed between IRS and PCA collections because IRS did not consider this and other circumstances that could affect taxpayers’ costs. The study also did not include a type of collected revenue—which IRS called noncommissionable revenue—that IRS officials tracked in a measure to assess the PDC program’s performance and that, in some cases, might have gone uncollected otherwise. For example, IRS did not pay PCAs a commission when debt was collected within a 10-day window after being assigned to a PCA. In establishing the program, an IRS official told us that IRS expected that many dollars would be collected within this 10-day window because taxpayers would send in payment after receiving notification that their debts were being assigned to a PCA. To the extent this occurred, not taking these revenues into account in the study may have led to an underestimate of the collections attributable to the PDC program. Other noncommissionable revenue included collections through actions IRS systemically took regardless of whether a case was assigned to a PCA, such as refund offsets. According to IRS officials, such revenues were not included in the PDC study for either PCAs or IRS collections.
IRS data through fiscal year 2007 show that total collections for the PDC program were about $32.1 million with $7 million (22 percent) of the total being noncommissionable; for fiscal year 2008, the total was $37.3 million with $9.6 million (26 percent) noncommissionable; and for fiscal year 2009, the total was $28.8 million with $11.5 million (40 percent) noncommissionable. IRS’s PDC study focused on just a few types of government costs. Those costs included IRS’s costs as well as the commissions paid to PCAs. Table 2 identifies these types of costs. IRS used a variable costing methodology rather than a full costing approach to compare the costs of PCAs and IRS working inventory that they determined to be similar. According to IRS, the variable costing approach included expenses that would vary depending on an increase or decrease in the inventory assigned to either the PCAs or IRS. The study did not include previous costs, such as costs in setting up the PDC program. In general, this overall approach to including costs is consistent with OMB guidance. That guidance says that analyses should be based on incremental costs and benefits and sunk costs should be ignored. IRS did not include management oversight or information technology since such costs would not vary with the volume of cases handled. For example, IRS did not include the costs of a PDC program oversight unit that among other things was responsible for monitoring PCA performance. Although these costs likely would not vary greatly with changes in the volume of cases handled by IRS or PCAs, if the absolute costs were significantly greater either for IRS or PCAs, the difference may have affected the cost comparison, especially over relatively small numbers of cases. IRS did not document whether these costs varied significantly based on who handled the cases. 
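The noncommissionable shares reported above follow directly from the yearly totals; a quick check of the percentages:

```python
# Reported PDC collections, in millions of dollars (total, noncommissionable),
# with the first entry cumulative through fiscal year 2007.
collections = {
    "through FY2007": (32.1, 7.0),
    "FY2008": (37.3, 9.6),
    "FY2009": (28.8, 11.5),
}

for period, (total, noncommissionable) in collections.items():
    share = noncommissionable / total
    print(f"{period}: {share:.0%} of ${total} million was noncommissionable")
```

This reproduces the 22, 26, and 40 percent figures cited above and shows the noncommissionable share growing each year.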
To determine the cost for the PCAs, IRS used historical cost data of the PDC program for fiscal year 2008 that were generated from IRS’s Integrated Financial System. IRS apportioned those costs to the number of cases in its study. Unlike the PCA cost data, the IRS costs were not system generated but were calculated based on estimates and assumptions. For example, IRS identified the transactions (such as phone calls made or notices sent to taxpayers) taken on each case in the sample by manually reviewing individual data posted in the taxpayers’ case files in the Integrated Data Retrieval System. To then calculate direct labor hours per case, IRS used information from inventory handling time reports and call handling time reports. IRS determined an average number of minutes per transaction, which it applied to the sampled cases in the study. Because the PCA-type cases were at least somewhat different from the cases ACS normally handled, applying the average minutes from typical transactions may or may not have accurately reflected the time actually taken on the PCA-type cases. Other estimated costs included telephone calls and postage for mailing a single installment agreement notice to taxpayers. Another fundamental program analysis principle is to estimate the expected results of the program. Because decisions about the future of a program depend on the expected future results of it or of any alternatives under consideration, future results are to be included on a discounted basis in either benefit-cost analysis (the value of expected net benefits, i.e., benefits minus costs) or cost-effectiveness analysis (to determine which alternative has the lowest costs for a given amount of benefits or which has the greatest benefits for a given amount of costs). Past performance can be relevant in helping to estimate the value of future results. IRS’s PDC study included no estimates of future costs and revenues for the alternatives studied.
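The discounting principle described above can be stated concretely: future benefits and costs for each alternative are converted to present value before comparison. The sketch below is illustrative only; the discount rate and the cash flows are hypothetical, not figures from the PDC study.

```python
def present_value(flows, rate):
    """Discount a stream of yearly amounts (years 1, 2, ...) to present value."""
    return sum(amount / (1 + rate) ** year
               for year, amount in enumerate(flows, start=1))

def net_present_benefit(benefits, costs, rate):
    """Expected net benefits on a discounted basis (benefits minus costs)."""
    return present_value(benefits, rate) - present_value(costs, rate)

# Hypothetical 3-year projections ($ millions) for two collection alternatives
# with the same expected revenues but different costs.
pca_alternative = net_present_benefit(benefits=[30, 28, 25], costs=[8, 8, 8], rate=0.07)
irs_alternative = net_present_benefit(benefits=[30, 28, 25], costs=[3, 3, 3], rate=0.07)
```

OMB Circular A-94, cited later in this report as a source of guidance, prescribes the discount rates to use in such analyses; the 7 percent rate here is only a placeholder.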
The study presented data and related ratios, including costs per dollar collected, only for cases worked in the past. Although the study text states that the cases studied were like those PCAs worked during the program and would work in the future, it provided no analytical basis for this assumption. Documenting any analyses of future costs and revenues was especially relevant for any decision on the future of the PDC program because IRS had previously revised the criteria for selecting PDC cases several times in order to provide the volume of work to PCAs that had been anticipated. Accordingly, reasonable questions existed about whether IRS would be able to continue providing a sufficient stream of work to PCAs and how that might affect the scope of the program and the nature of the cases PCAs would work. Absent any such documented analysis, decision makers and those overseeing the agency had a limited basis on which to assess whether PCAs likely would have been as effective at working the cases IRS could deliver in the future as they had been in working past cases. IRS officials said that they considered future results, but only outside the study, in deciding to end the PDC program. IRS officials said that during management meetings on deciding whether to continue the PDC program, they considered past program performance compared to expectations and how that might affect future performance. For example, they said that they considered gaps between IRS’s original expectations, what the program actually realized, and what the program would likely achieve in the future given actual program experience. Among other things, IRS officials noted the declining available case inventory levels for the program. They said that the program’s actual performance compared to expectations was a reason that IRS ended the program. IRS was unable to provide documentation of any analyses of future expected costs and results. 
Without such analyses, it is unclear to what extent the program did not meet expectations, whether IRS determined the underlying reason, and whether the program’s future performance could have been improved in a manner that could have affected the decision on the program. In authorizing the PDC program in 2004, Congress required IRS to create a measurement plan to capture information on successful collection techniques used by the contractors that IRS could adopt. The PCAs’ best practices were to be compared with IRS’s collection practices. IRS was to report on this measurement plan and its results in a mandated biennial report that was to include specific types of information. In an unpublished draft biennial report, IRS said it reviewed PCA best practices and concluded that none of them were sufficiently better than IRS’s practice to merit adoption. IRS officials provided us a draft version of the biennial report for 2007. IRS neither finalized the report nor released it. IRS officials pointed to significant transition in the PDC office during this time, as well as transitions in the Deputy Commissioner’s and the Commissioner’s offices. This draft report described IRS’s steps to identify lessons learned from PCAs. It said that IRS did not find any immediate opportunities to adopt PCA practices, but provided no details beyond this sentence. The draft report also said IRS would continue to try to identify lessons learned. IRS officials said that IRS had not changed its criteria to start regularly selecting PCA-type cases to work because the PDC study results were not sufficient to identify which of the PCA-type cases could be productively worked. However, IRS officials said that they were surprised by the study results, which indicated that IRS staff might have better results working these cases than some of the cases IRS normally works. 
IRS officials said that the types of cases sent to PCAs previously had been considered low priority because of low potential collection return. In establishing the PDC program, IRS officials indicated that the PCA contractors would be working inactive collection cases that IRS collection staff would not be working. For example, in May 2007, after IRS had sent the initial inventory of cases to PCAs to work, the Acting Commissioner of Internal Revenue testified that if the money invested in the PDC program was not used for it, IRS would not use the funds to work cases that would have been assigned to PCAs. Rather, the funds would be used to work other cases considered higher priority in the tax debt inventory. Because of the surprising result, IRS began a pilot study of the types of cases that had been worked by PCAs. According to IRS officials, the pilot study’s goals were to (1) provide coverage for a segment of the unpaid tax debt inventory that would not be worked because of the end of the PCA contract and (2) provide information on which types of cases IRS could fruitfully work in the future. IRS placed approximately 19,000 pilot cases with ACS over a 4-month period beginning in September 2009. IRS officials said that the agency randomly drew the cases from the database of cases that IRS had used to assign cases to the PCA staff. IRS employees cannot identify which of their assignments are PCA-type cases. As of July 2010, the status of these cases was as follows: 8,559 cases (45 percent) had been closed, for example, through some form of payment or other action; 4,954 cases (26 percent) were being worked; and 5,552 cases (29 percent) had yet to be worked. As of June 2010, IRS officials said that they expected that the work on these pilot study cases would be finished by December 2010. For these cases, IRS collected data on its activities (such as liens and levies) and the results (such as dispositions and dollars collected). 
Of the 8,559 cases closed as of July 1, 2010, IRS had received payment in full in 931 cases (11 percent) and had entered into installment agreements in 2,878 cases (34 percent). IRS officials said they had planned to use the collected data to determine if changes should be made to the case selection criteria to assign certain types of PCA-type cases to the IRS active collection inventory. According to IRS officials, after the pilot cases were worked, IRS’s Office of Program Evaluation and Risk Analysis (OPERA) was to study the collected data to see what happened with the cases. IRS expected that the OPERA study would take a year to complete (i.e., until about December 2011). IRS officials said that the agency was then to determine whether more PCA-type cases should be routinely worked in IRS and, if so, which ones. Following guidance on conducting program analyses helps ensure that relevant information is considered and a sound methodology is used, including stated study objectives and a clear and appropriate study design for answering the objectives. As we have reported in the past, these principles are essential when developing an analysis or evaluation plan. For example, this plan should not only be written but have sufficient detail to answer questions such as the following: Are the goals and objectives of IRS’s pilot study clearly stated? Has IRS identified the types of data to be collected? What specific types of analysis will be performed? The documentation IRS had available describing the pilot study—a draft plan prepared in December 2009—addressed summarizing and providing data on the sampled cases (such as collection actions taken, e.g., liens and levies) and the results (such as the disposition or current status of the cases and the dollars collected). But as of June 24, 2010, IRS had not produced an approved plan on how it intended to analyze the pilot study data even though it had worked about three-fourths of the cases selected for the pilot.
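The pilot study case counts above can be tallied as simple shares of the roughly 19,000 cases placed with ACS; the short check below uses the counts reported as of July 2010.

```python
# Pilot case status as of July 1, 2010 (counts from the report).
status = {"closed": 8_559, "being worked": 4_954, "not yet worked": 5_552}
total = sum(status.values())  # approximately 19,000 cases

for name, count in status.items():
    print(f"{name}: {count:,} cases ({count / total:.0%})")

# Breakdown of the closed cases (counts from the report).
closed = status["closed"]
print(f"paid in full: {931 / closed:.0%}")              # 11%
print(f"installment agreements: {2_878 / closed:.0%}")  # 34%
```

The shares match the 45, 26, and 29 percent status figures and the 11 and 34 percent closed-case figures reported in the text.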
According to IRS officials, progress in developing an analysis plan for the pilot study had been delayed by efforts to implement Consolidated Decision Analytics (CDA). Among other things, CDA is to implement new models to better predict the collection potential of unpaid tax debt cases to help ensure the best use of IRS’s collection resources. Officials said that any plan for analyzing the pilot study’s data would need to ensure that the pilot study results will be useful given IRS’s plans to more fully implement CDA in January 2011. Beyond not knowing how the pilot study data will be analyzed, IRS had not clarified its criteria on how it would use the results of any analyses to make a decision about assigning more PCA-type cases to be worked in IRS. Specifically, IRS had no documentation on the criteria that officials would use in making this decision and what factors, if any, beyond the analyses of the pilot study data would contribute to those criteria or that decision. Developing criteria would be important to ensure that the variables measured in the study would be useful in supporting the decision on whether to change case selection criteria to regularly pursue such cases. As we concluded our review, the status of the pilot study and whether or how PCA-type cases would be included in active collection case inventory became less clear. IRS provided conflicting information about determining whether PCA-type cases have sufficient collection potential to be included in its collection inventory. On one hand, officials said that CDA models were built by tracking the characteristics and collection potential of actual collection cases. On the other hand, PCA-type cases generally have not been worked by collection staff, which is why IRS began its pilot study of PCA-type cases to determine their collection potential. 
In its comments on our draft report, IRS said that the CDA models that would be implemented in January 2011 had overtaken the need to complete the pilot study. Examining the CDA models was outside the scope of our review. Therefore, we had not reviewed documentation on what types of data were used in developing the CDA models. However, in response to our direct question about whether PCA-type case results were used in developing the CDA models to be implemented in January 2011, an IRS official said that they had not been used. IRS’s March 2009 announcement ending the PDC program said that IRS anticipated hiring more collection staff in fiscal year 2009 and referred to support from the administration and Congress for increased IRS enforcement resources. According to IRS officials, IRS’s ACS staff could work on the PCA-type cases. Based on IRS data, ACS staffing levels—as measured by full-time equivalent (FTE) positions—have increased since fiscal year 2008. The ACS staffing levels increased by 308 FTEs from fiscal years 2008 to 2009 (i.e., 3,395 to 3,703) and were to increase by 94 FTEs during fiscal year 2010 (i.e., to 3,797). These staffing levels account for hiring done to offset attrition as well as additional hiring beyond attrition. The PDC program announcement was unclear on whether additional staff to be hired would be used for PCA-type cases. On the one hand, the announcement said that “IRS determined the work is best done by IRS employees,” which appears to refer to PCA-type work. On the other hand, the announcement also said that “new employees would give the IRS the flexibility to make assignments based on the areas of greatest need rather than filtering which cases can be worked using contractor resources.” IRS officials told us that the announcement did not imply that the collection staff to be hired in fiscal year 2009 would be used to work PCA-type cases. 
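The ACS staffing changes cited above are year-over-year differences in FTE counts, as this brief check shows (figures from IRS data reported in the text):

```python
# ACS staffing levels in full-time equivalents (FTE) by fiscal year,
# with the FY2010 figure a planned level.
ftes = {2008: 3_395, 2009: 3_703, 2010: 3_797}

years = sorted(ftes)
for prev, curr in zip(years, years[1:]):
    print(f"FY{prev} -> FY{curr}: +{ftes[curr] - ftes[prev]} FTEs")
```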
In comparing IRS and PCAs working the same types of cases in the PDC study, IRS was responsive to concerns of some in Congress, the National Taxpayer Advocate, and others about the study’s comparison. Although a study that adhered to federal guidance on analyzing programs would have better informed IRS’s decision on the fate of the PDC program, it is not possible to know whether such a study would have had materially different results or changed IRS’s decision on the program. However, it would have provided more complete information for policymakers to consider. To the extent possible, and especially for significant program decisions, IRS should adhere to OMB and other guidance for economic analyses to better ensure that policymakers have adequate information to support their decisions. IRS does not have guidance for managers on the types of analyses that should be done and documented to support program decisions. Such analyses can yield significant benefits by helping inform decision making, but they also incur costs. Therefore, careful consideration of the potential costs and benefits of various study designs is necessary to select an appropriate study design and scope to answer the relevant questions in a methodologically sufficient manner. IRS’s PDC study suggested that at least certain PCA-type cases, which IRS had not been working, may be worth including in ACS’s inventory for collection action. To the extent that these results are valid and reliable, IRS may be able to make a relatively low-cost investment in certain PCA-type cases to collect tax debts. However, as we concluded our audit, IRS provided conflicting information about determining whether PCA-type cases have sufficient collection potential to be included in its collection inventory. Given the conflicting information available to us, we believe it is important that PCA-type case results are considered and incorporated as appropriate into the CDA models.
If IRS determines that completing the pilot study is the best method to do so, a documented methodology and criteria for the study’s analysis could help IRS make a better decision on which PCA-type cases, if any, should be added routinely to active collection status. We recommend that the Commissioner of Internal Revenue take the following three actions: Establish guidance on the types of analyses that should be done to support decisions to initiate, renew, or expand programs. The guidance might refer to OMB Circular A-94 and, if needed, provide any supplementation specific to IRS. Establish a policy requiring documentation for the design, analyses, and conclusions of studies supporting program changes. Ensure that PCA-type case results are considered and incorporated as appropriate into the CDA model. If IRS determines completing the pilot study is the best means to ensure that PCA-type case results are considered for the CDA models, the Commissioner should ensure that the pilot study has a documented methodology and criteria to guide IRS’s analysis and decision. The IRS Deputy Commissioner for Services and Enforcement provided written comments on a draft of this report in a September 8, 2010, letter, which is reprinted in appendix I. IRS staff also provided technical comments, which we incorporated into the report as appropriate. IRS disagreed with our finding that its PDC study was not soundly designed to support its decision on whether to continue contracting out debt collection. IRS said the study’s comparison of the cost-effectiveness of PCAs and IRS working similar cases provided meaningful data that aided its decision making. IRS cited an independent review of the PDC study that found the results to be reasonable, even though the study had limitations and constraints. We continue to believe that the study was not a soundly designed cost-effectiveness comparison for supporting IRS’s decision. 
Our report discusses our reasoning in detail, focusing on the study’s methodological errors, narrow scope, and lack of adherence to guidance for doing such studies. For example, IRS did not do the analysis necessary to generalize the study results to the full PCA case population even though study results could differ significantly when generalized to the full population. Our meetings with staff who performed the independent review and our analyses of their documentation did not change our finding about IRS’s study. IRS agreed with our two recommendations dealing with establishing guidance on analyses to support decisions to initiate, renew, or expand a program and policies to ensure documentation of such studies. More specifically, IRS said it would review current guidance and policies and develop additional guidance where needed. IRS agreed in principle with our third draft recommendation on ensuring that a documented methodology and criteria guide IRS’s analysis and decision on whether to include selected PCA-type cases in its collection inventory, but said events have overtaken the need to complete the ongoing study, citing IRS’s plans to implement CDA models in fiscal year 2011. These models are intended to select cases with the best potential for collection action in one of IRS’s work streams. IRS said that measuring the impact of the PCA-type cases, as planned when the PCA project was terminated, is no longer necessary. We had discussed with IRS officials the continued need for the pilot study when IRS told us in July 2010 that it planned to implement CDA in January 2011. IRS officials, including the Acting Director, Collection Business Reengineering, said that while CDA selection would focus on collection potential and not type of case (i.e., PCA-type), the pilot study of approximately 19,000 PCA-type cases might provide data useful for improving CDA models.
Officials affirmed that they initiated the pilot study because the PDC study showed that PCA-type cases might have high collection potential at low cost. Accordingly, our draft report recommended that IRS document the methodology and criteria for its pilot study. Information provided in IRS’s comments on the report and in response to our subsequent questions suggests that whether and how PCA-type cases may be selected for active collection inventory is uncertain. Although IRS’s comments on the draft report said that the need for completing the pilot case study was overtaken by the development of the CDA models, in separate technical comments IRS officials said they were continuing to work the pilot cases and provided no indication that they would stop working them before CDA is implemented in January 2011. Further, in response to our question about whether PCA-type case results were used in developing the CDA models, an IRS official said that they had not been used. In response to IRS’s comments, and absent evidence that CDA will be implemented as planned and that its models will include IRS’s experience in attempting collection of PCA-type cases, we revised the third recommendation to better focus on ensuring that PCA-type case results are considered and incorporated as appropriate into the CDA models. Further, if IRS determines completing the pilot study is the best means to ensure that PCA-type case results are considered for the CDA models, we maintained our recommendation that IRS ensure that the study has a documented methodology and criteria to guide IRS’s analysis and decision. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contact named above, Tom Short, Assistant Director; Ray Bush; George Guttman; Lois Hanshaw; Ronald W. Jones; Veronica Mayhand; Ed Nannenhorn; Karen O’Conor; and Cynthia Saunders made key contributions to this report.
In September 2006, the Internal Revenue Service (IRS) started the private debt collection (PDC) program for using private collection agencies (PCA) to help collect some unpaid tax debts. Aware of concerns that PCAs might cost more than using IRS staff, IRS began studying the collection costs and performance of PCAs and IRS. In March 2009, IRS announced that it would not renew its PCA contracts based on the study and announced plans for increasing collection staffing. As requested, GAO is reporting on whether (1) the study was sound as primary support for IRS's PDC decision and (2) IRS has planned or made changes to its collection approach based on its PCA experience and PDC study. GAO compared IRS's study to federal and other guidance on what should be included in analyses to support program decisions and analyzed IRS's changes given expectations that IRS would consider PCAs' best practices. IRS's comparative study of the PDC program was not soundly designed to support its decision on whether to continue contracting out debt collection. Although the study was not originally intended or designed as primary support for the decision, IRS officials nonetheless used it as such. IRS did not have guidance for program managers on the type of analysis that should be done to support decisions to create, renew, or expand programs. IRS had not retained sufficient documentation on the sample used in the study or documented some analyses that would have been helpful if performed. The study results may be overstated or understated because the study sample was not generalizable to the program as a whole. The study had a narrow objective of comparing results for IRS working the same cases as PCAs had, and as a result, the study design did not consider other factors recommended by Office of Management and Budget and other guidance on conducting program analysis. For example, the study did not analyze alternatives to program scale, such as expanding it or scaling it back. 
Program analysis guidance states that to the extent possible all costs and benefits should be counted and alternative means of achieving a program's goals should be considered. But the study did not identify important costs and benefits, such as whether taxpayers' compliance costs would be different if IRS or PCAs work debt cases. Nevertheless, neither GAO nor IRS officials know whether the study results and decision on the program would have differed significantly if it had been designed to be primary support for IRS's PDC program. In commenting on a draft of GAO's report, IRS disagreed that the PDC study was not soundly designed. GAO stands by its analysis detailing the study's errors, narrow scope, and lack of adherence to guidance. These design and methodology deficiencies limited the study's usefulness in supporting IRS's decision. IRS has not made or planned changes to its collection approach based on its PCA experience and study. In authorizing the use of PCAs, Congress required IRS to report to Congress its measurement plan to identify any of the PCAs' best practices that IRS could adopt to improve its own collection operations. IRS did not continue to report to Congress as required. In an unpublished draft report, IRS asserted that it had reviewed a number of PCA practices and found no immediate opportunities to change its collection approach. IRS did not provide GAO documentation on the study to support that conclusion. In part because PCA-type cases had previously been considered low priority, IRS officials were surprised by the PDC study results, which indicated that IRS staff might have better results working PCA-type cases than some of the cases IRS normally works. IRS officials said that they initiated a pilot study in 2009 to help them decide whether to use IRS staff to work selected types of PCA cases. As GAO concluded its review, IRS provided conflicting information on the role of the pilot study. 
On one hand, IRS said a collection selection system to be implemented in January 2011 superseded the need for the study. On the other hand, an IRS official said that the results from PCA-type cases were not used in the development of the new case selection system. GAO recommends that IRS (1) establish guidance on analyses to support program decisions, (2) establish a policy requiring documentation of program studies, and (3) ensure that PCA-type case results are considered for IRS's new case selection model. IRS agreed with the first two recommendations and agreed in principle with the third, which GAO revised to reflect updated information that IRS provided.
Biocontainment laboratories—designed with specific environmental, storage, and equipment configurations—support containment efforts in the day-to-day work with biological agents. These labs are designed, constructed, and operated to (1) prevent accidental release of infectious or hazardous agents within the laboratory and (2) protect lab workers and the environment external to the lab, including the community, from exposure to the agents. For example, the biological safety cabinet (BSC) is laboratory safety equipment that is used when manipulating infectious organisms. BSCs are enclosed cabinets with mechanisms for pulling air away from the worker and into a HEPA filter, which provides protection for the worker and prevents releases into the environment. BSCs might be designed with a limited workspace opening, or they might be completely enclosed with only gloved access and air pressure indicators to alert users to potential microbial releases. The selection of the BSC would depend on the (1) lab’s risk assessment for the specific agent and (2) nature of work being conducted, as guided by the Biosafety in Microbiological and Biomedical Laboratories (BMBL) and other relevant guidance, such as OSHA regulations and National Institutes of Health (NIH) guidelines for research involving recombinant DNA. There are four biosafety levels (BSL). These levels—combinations of laboratory practices and techniques, safety equipment, and laboratory facilities recommended for labs that conduct research on infectious micro-organisms and toxins—are based on the type of work performed, information about the infectious agent, and the function of the laboratory: Biosafety level 1 (BSL-1) is suitable for work with agents not known to consistently cause disease in healthy adults and that present minimal potential hazard to laboratory personnel and the environment. 
Biosafety level 2 (BSL-2) is suitable for work with agents that pose moderate risks to personnel and the environment. Biosafety level 3 (BSL-3) is suitable for work with indigenous or exotic agents that may cause serious and potentially lethal disease if inhaled. Biosafety level 4 (BSL-4) is required for work with dangerous and exotic agents that pose a high risk of life-threatening disease or have aerosol or unknown transmission risk. Examples of agents and toxins used within these labs include those that primarily affect humans and animals, such as Botulinum neurotoxin, a naturally occurring poison, lethal to humans and animals, but used for medical and cosmetic purposes in drugs such as Botox; animals, such as foot-and-mouth disease (FMD), a highly contagious viral disease of cloven-hoofed animals—such as cattle, swine, and sheep—that causes debilitation and losses in meat and milk production (while FMD does not have human health implications, it does have severe economic consequences); and plants, such as certain varieties of Xylella fastidiosa, which can kill citrus plants but do not have human health implications. Lab levels can also vary depending on their use. For example, research that involves animal or plant pathogens may be designated as animal biosafety levels (ABSL) 1-4 or BSL-3-AG. Similarly, some people may refer to BSL-3 labs as “high-containment” labs and BSL-4 labs as “maximum containment” labs. There are also several types of labs—including clinical, research, teaching, public health (or reference), and production (or commercial)—which are generally categorized on the basis of the work conducted. While these labs all involve work with infectious micro-organisms, there are regulatory, accrediting, and risk differences associated with each type. For example, clinical labs within hospitals test patient samples and may often be unaware of the micro-organism they are handling until their tests have identified it. 
In contrast, research, reference, and production (commercial) labs, while they each have different purposes and environments, tend to be aware of the micro-organisms they are handling. Clinical labs also have specific accrediting and state reporting requirements, and their control structure for handling illnesses is different from other types of labs. We use the general term “biological lab” to include biological labs of all levels or types that handle micro-organisms or clinical samples. We use this general and inclusive term because SRSs could be used in any environment with safety risks, including different types or levels of labs. However, this does not necessarily imply that a single SRS is appropriate or applicable to all labs of varying type or level, although an SRS that encompasses as broad a view of a domain as possible has significant advantages. For example, one national SRS would provide information that can cross boundaries where common and similar practices exist and avoid the “stove-piping” of safety information. Many different federal agencies have some connection with biological labs. Such agencies are involved with these labs in various capacities, including as users, owners, regulators, and funding sources. The CDC and APHIS regulate entities that possess, use, and transfer select agents and toxins. In addition, entities are required to report the theft, loss, or release of any select agent or toxin to the CDC or APHIS, although we have found reporting failures at some labs subject to this requirement. Along with environmental, storage, and equipment configurations, various guidelines for lab practices support worker and public safety. These biosafety guidelines offer general and agent-specific containment and risk assessment practices. For example, the BMBL suggests microbial practices, safety equipment, and facility safeguards that vary by type of agent and intended use. 
These documents are updated periodically—the BMBL is currently in its fifth edition—in order to “refine guidance based on new knowledge and experiences and to address contemporary issues that present new risks that confront laboratory workers and the public health.” While the BMBL and other guidelines are useful for promoting safety, they also recognize that there are unknown and emerging laboratory safety risks and that ongoing efforts to gather information about those risks are essential for continued safety improvement. One of the key information sources for these updates is published reports of LAIs. However, it is widely recognized that these reports reflect only a fraction of actual LAIs. To develop evidence-based guidelines and safety-improvement initiatives, other industries with inherent risks to workers and the general public—such as aviation, commercial nuclear power, and health care—collect and analyze safety data. These data can come from safety events. Safety event levels—depicted in terms of a risk pyramid (see fig. 1)—increase in severity as they decrease in likelihood. Whether and where the lines are drawn—between accidents (fatal or nonfatal), incidents, and hazards—varies (1) across industries and (2) according to whether the safety event resulted in no ill effects, minor injuries, or severe injuries or deaths. Events at the top of the pyramid—generally identified as “accidents” (sometimes further divided depending on fatality)—have significant potential for harm or result in actual harm to one or more individuals. These events can include radiological exposure, industrial chemical spills or explosions, airline crashes (with or without loss of life), patient medication errors that result in illness or death, and LAIs. Accidents—especially fatal ones—are generally infrequent, hard to conceal, and often required to be reported. 
Events at the center of the risk pyramid—generally referred to as “incidents”—are those that could have resulted in serious harm but did not. Incidents occur more frequently than accidents and include near misses, close calls, or other potential or actual adverse events and violations, although definitions vary within and across industries. For events at the base of the pyramid—generally referred to as “hazards”—no incident or accident need occur. These events include observations about the work environment, procedures, equipment, or organizational culture that could be improved relative to safety. Safety data from accidents, incidents, and hazards provide the source information for analysis of accident precursors—the building blocks of events that can lead to injury or death. The focus on precursor data arose as a result of the limited amount of data that could be identified from accident investigations. Such data are often “too sparse, too late and too statistically unreliable to support effective safety management.” In addition, the severity and sometimes fatal consequences of accidents often preclude investigators from gathering sufficient detail to fully understand systemic (as opposed to individual) causes of the accident. Incident data are a particularly rich source of precursor information because incidents occur more frequently than accidents. Moreover, incidents do not often rise to the level of regulatory or legal violation because no serious harm has occurred. Workers are therefore generally less fearful of punishment in reporting their mistakes at this level. Collection of safety data and analysis of accident precursors focus on trying to identify systemic, rather than individual, causes of error. Industries often take this system-based approach to risk management because they recognize that “blaming problems on ‘human error’ may be accurate, but it does little to prevent recurrences of the problem. 
If people trip over a step x times per thousand, how big must the x be before we stop blaming people for tripping and start focusing on the step?” The system-based approach focuses on analyzing accident precursors to understand “how and why the defenses failed.” According to this approach, blaming individuals for accidents—as in the person-based approach—not only fails to prevent accidents, but also limits workers’ willingness to provide information about systemic problems. When precursor information from accidents, incidents, and hazards is analyzed as part of a system, evidence-based, industrywide safety improvements are possible. For example, analysis of reports of health care workers improperly medicating patients has helped identify and address systemic problems with medication labeling and storage. In such cases, hospitals could have punished an individual for the error. Instead, they focused on learning rather than blame, which encouraged worker reporting and led to needed changes in medication labeling and storage. This, in turn, improves patient safety because any health care worker—not just the one who reported the error—will be less likely to improperly medicate patients in the future. SRSs—both mandatory and voluntary—are the key tool for capturing detailed safety data. Many industries have recognized that the costs of repeated accidents or managing the aftermath of an accident can far outweigh the costs to establish and maintain a reporting system. Despite vast differences across industries, the sources of risk—humans, technology, and environment—are the same. Consequently, the tools—such as SRSs—that industries other than biological labs use to understand these risks can also support evidence-based, industrywide biosafety improvement efforts. This is especially significant in understanding the risks in biological labs because current biosafety guidelines are based on limited information. 
While individual states or labs may have reporting mechanisms, no formal system exists for sharing data among all labs. In addition, while data reported through academic journals or state disease registries are accessible industrywide, there are significant reporting barriers. For example, before information about an incident becomes available to others through academic publications, infections must be recognized as laboratory-acquired, deemed scientifically interesting, written up and submitted for peer review, and accepted for inclusion in an academic journal. Furthermore, concerns about losing funding or negative publicity can create barriers to an institution’s willingness to encourage publication of LAI information. Reports of infections through state disease registries are also limited because information about the source of the infection is generally not collected and not all infectious diseases are required to be reported. In addition, the infected individual must see a health practitioner who recognizes the status of the disease as reportable and takes steps to report it. Finally, releases without infection—or without recognized infection as a result of a release—are unlikely to be reported at all, despite the valuable precursor data that could be gleaned from the event. A system for collecting safety data from across the lab community has been proposed as a means to improve the evidence base for biosafety guidelines. However, as indicated by reporting lapses to the mandatory system for theft, loss, and release of select agents, implementation of a reporting system does not immediately create a highly useful one to which all workers instantaneously submit data on their errors. When initiating any reporting system, it is important to consider, up front and throughout, a myriad of design and implementation issues to ensure that the system operates as effectively as possible. 
Consequently, we look to research and experience to inform design and implementation choices. According to lessons from our review of the literature, the design and implementation of an effective safety reporting system (SRS) includes consideration of program goals and organizational culture for decisions in three key areas: reporting and analysis, reporter protection and incentives, and feedback mechanisms. Each of the key areas contains subcategories of related decision areas, which should also tie into program goals and organizational culture. Figure 1 illustrates the relationship among program goals, organizational culture, and the three key areas with their associated subcategories. A program can have a variety of goals in the design and implementation of an SRS, apart from the primary goal of improving safety, according to the literature. For example, an SRS can be used for regulatory purposes or for organizational learning—a distinction that will fundamentally affect design decisions, such as whether reporting will be mandatory or voluntary, what types of reporter incentives and protections should be included, who will analyze SRS reports, and what feedback will be provided. An SRS can be designed and implemented to meet a variety of subgoals as well. Subgoals can include capabilities for trend analyses, accountability improvement, liability reduction, and performance indicators. The overall goals and subgoals should be determined in advance of design decisions, so that decisions in the three key areas support program goals. Identification of and agreement on program goals are best accomplished through the involvement of appropriate stakeholders, such as management, workers, industry groups, accrediting bodies, and relevant federal entities, according to the literature. Even with well-defined goals, the success of any SRS is intertwined with the organizational culture in which it will operate. 
Organizational culture—the underlying assumptions, beliefs, values, attitudes, and expectations shared by those in the workplace—affects implementation of programs in general and, in particular, those designed to change that underlying culture. SRSs are fundamentally tools that can be used to facilitate cultural change—to develop or enhance a type of organizational culture known as a culture of safety. A culture of safety implies individual and organizational awareness of and commitment to the importance of safety. It also refers to the personal dedication and accountability of all individuals engaged in any activity that has a bearing on safety in the workplace. Development of a positive safety culture often involves a shift in how workers view and address safety-related events. This shift is supported by data on safety-related events provided by SRSs. Accordingly, an environment in which workers can report safety events without fear of punishment is a basic requirement for a safety culture and an effective SRS. In addition, an important consideration in design and implementation is where on the safety culture continuum an organization is currently positioned and where it would like to be positioned. It is unlikely that workers would report safety events in organizations with punishment-oriented cultures—where workers are distrustful of management and each other. To promote reporting in such environments, systems can be designed with features that help alleviate these worker concerns. However, understanding where the organizational culture is in relation to reporting is essential for choosing system features that will address these concerns. Changing organizational culture is also generally recognized as a long-term effort that takes at least 5 to 10 years. 
In high-risk industries, reporting systems are often developed in conjunction with other efforts to make safety a priority, and as the culture changes from these efforts, so might the reporting system to reflect the changing culture. For example, as safety events become more visible or well-defined, reporting forms or requirements can be modified to reflect this new understanding. Similarly, if reporting is waning but safety events continue to occur, adjustments to reporting incentives, definitions of events, and other features may be necessary to improve reporting. Such ongoing assessment of organizational culture can also help identify areas where system adjustments are needed and support efforts to evaluate the contributions of the SRS to safety culture improvement. As with any tool for cultural change, the value of the SRS will be commensurate with the investment in its use. If an SRS is to support overall safety improvement, training, outreach, and management support are necessary to instruct staff in the desired culture and use of the new system. Lessons from the literature on the role of program goals and organizational culture in SRSs include the need to define overarching program goals and subgoals up front; involve stakeholders (e.g., management, industry groups, associations, and workers) in developing program goals and designing the SRS to increase support among key populations; assess the organizational culture to guide system design choices in the three key areas; and ensure that reporters and system administrators receive adequate training regarding the function and application of the reporting system. Among the first design decisions for an SRS are those that cover reporting and analysis. Decisions in this key area include basic questions about the (1) level of event that should be reported to the system, (2) classification of events, (3) report format and mode, (4) management of reporting, and (5) analysis of the reported data. 
The severity of events can vary from safety concerns to mass casualties, and what is considered a “reportable event” has implications for whether reporting should be mandatory or voluntary. Mandatory reporting is generally preferred when program goals are focused on enforcement. Serious events—such as accidents resulting in injuries or deaths—are typically the level of event collected in mandatory SRSs. Mandatory reporting is also generally preferred where there is potential or realized association with injury or death and related regulatory and legal implications, as in accidents. Voluntary reporting is generally preferred when the program goal is learning—identifying actions, processes, or environmental factors that lead to accidents. Voluntary reporting in these cases is more appropriate because the goal is improvement rather than compliance. Events at the incident level—errors without harm, near misses, close calls, and concerns—are less serious than accidents and are typically collected through voluntary SRSs. Both mandatory and voluntary reporting systems are often employed concurrently—sometimes independently and sometimes in complementary roles—because programs face the dual requirements of regulating and promoting safety improvement. The level of event to be reported also depends on the organizational culture. Industries new to safety reporting—in particular, those in which the definition or recognition of an accident is unclear—may find it particularly difficult to identify a reportable incident or hazard. If the reporting threshold is set too high, significant safety hazards may go undetected and unreported. In such environments, a low initial threshold for reporting might be helpful, raising it over time as workers develop familiarity with reportable events. However, because of the greater frequency of incidents and safety concerns, voluntary SRSs can be overwhelmed by the volume of submitted reports. 
SRSs that focus on a particular type of incident or hazard area may help to counteract this problem. In addition, if the reporting threshold is set too low, reporters may feel events are too trivial for reporting and that the SRS has little value. For example, surveys of nurses and doctors have shown a range of opinions that constitute a barrier to reporting, including beliefs that not all near-miss errors should be reported or that reporting close calls would be unlikely to result in significant change. The prevalence of these beliefs may reflect that a “reporting culture”—one in which staff recognize and submit reportable events—is not fully established. Lessons from the literature on determining the level of event for reporting include the need to base the decision for mandatory or voluntary reporting on (1) the level of event of interest and (2) whether the SRS will be used primarily for enforcement or learning and to set reporting thresholds that are not so high that reporting is curtailed, but not so low that the system is overwhelmed by the number and variety of reportable events. To facilitate data-sharing across the organization or industry, classification schemes provide standardized descriptions of accidents, incidents, and concerns. Effective classification schemes can facilitate safety improvement across organizations and industry by providing a common language for understanding safety events and precursors. For example, if several hospitals use a standard classification scheme to submit incident reports to a patient SRS, the resulting data can be used to examine incident data across hospitals. Such data allow benchmarking of similar occurrences and promote a better understanding of core hazards that exist across an industry. Clearly defined and familiar classification terminology can also help workers understand when and what to report. 
However, achieving a well-defined and clear classification scheme—especially one that can be used across an industry—can be difficult because different groups within an organization or across an industry may classify events differently. For example, one study on medical error reporting found that nurses classify late administration of medication as a medical error, whereas pharmacists do not. Classification schemes should be broad enough to capture all events of interest, but also well-defined enough to minimize receipt of extraneous information. For example, organizational learning systems, like FAA’s NASA-run Aviation Safety Reporting System (ASRS), include a broad definition of safety-related events to facilitate voluntary reporting of all events. Alternatively, mandatory systems may include a more specific classification scheme to capture deviations from standard operating procedures. However, overly restrictive schemes may lead workers to focus on certain events and neglect to report others. For example, if a classification scheme is developed to consider only compliance with an industry’s standard operating procedures, workers may not report safety-related incidents that involve factors other than compliance. Similarly, overly detailed classification schemes may be confusing for reporters if they do not know the appropriate codes to apply. In addition, a classification scheme must be clear enough for workers to understand what counts as a reportable incident. Otherwise, underreporting or misreporting of incidents may result. If possible, use of pre-existing industry-specific terminology in the classification scheme can support information flow across the industry and help workers—especially in industries new to safety reporting—adapt to the SRS. Lastly, a classification scheme may require the flexibility to allow different sites to adapt fields and elements to match their own program goals and organizational cultures. 
Design of a classification scheme may incorporate several strategies, including (1) using an existing classification scheme from another SRS, (2) modifying an existing classification scheme for use in a new SRS, (3) developing a classification scheme based on incident reports from the new or a similar SRS, or (4) using experts to develop a classification scheme. Lessons from the literature on designing classification schemes and associated terms include the need to develop classification schemes and associated terms that are clear, easy to understand, and easy to use by drawing on terms already well understood in the industry; test whether classification terms are clearly understood by different groups in the organization; allow sufficient flexibility to (1) avoid narrowing the scope of reporting in a way that limits all events of interest at the chosen level of event, (2) allow different sites—if multiple sites will be reporting to the same system—to adapt fields and elements to match their own organizational culture, and (3) capture different types of events and precursors, as they can change over time; and develop a classification scheme that best suits the analytical requirements and the comfort level of the organizational culture with safety reporting and safety event terms. Reporting must be readily accessible and allow for sufficient description of safety events without overburdening reporters with extensive narrative requirements. Data collection considerations include the format of the report (that is, the types of questions included on the reporting form) and the mode of the report (that is, how a report is physically submitted to the SRS, for example, by paper or Internet). Both the report format and mode can affect the incentive to report; the ease of reporting; and the type, quantity, and quality of data collected. 
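To make the classification and report-format choices discussed above concrete, the following is a minimal sketch of how a structured incident report might be encoded. The event levels follow the risk-pyramid terms used in this report, but the category terms and field names are hypothetical and not drawn from any actual SRS.

```python
from dataclasses import dataclass
from enum import Enum

# Event levels mirroring the risk-pyramid terms used in this report.
class EventLevel(Enum):
    ACCIDENT = "accident"   # actual harm occurred
    INCIDENT = "incident"   # could have caused harm but did not
    HAZARD = "hazard"       # unsafe condition observed; no event occurred

# Hypothetical category terms; a real scheme would reuse terminology
# already familiar within the industry.
CATEGORIES = {"equipment", "procedure", "environment", "training", "other"}

@dataclass
class IncidentReport:
    level: EventLevel
    category: str          # structured field (e.g., a drop-down box)
    narrative: str = ""    # optional free-text detail for later analysis

    def __post_init__(self):
        # Reject categories outside the agreed classification scheme.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

report = IncidentReport(
    level=EventLevel.INCIDENT,
    category="equipment",
    narrative="Cabinet airflow alarm sounded during routine work; no exposure.",
)
```

A scheme along these lines keeps the structured fields small enough for easy reporting and cross-site aggregation, while the optional narrative preserves detail for later analysis.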
Decisions regarding the format and mode of reporting are closely tied to the type of data desired from the SRS and the organizational culture. Report formats affect the quantity and quality of reports. For example, question formats that allow workers to explain the incident through narrative description may yield extensive details about the incident. The literacy skills of the reporting population are important considerations as well. Long narratives might be simple for the highly educated but intimidating to those with less writing proficiency. However, if workers are resistant to reporting, structured question formats that use check boxes or drop-down boxes with categories may decrease the time it takes to complete an incident report and thereby increase the incentive to report. Using structured question formats will also decrease the amount of coding and qualitative analysis that must be performed to examine the data. One limitation of structured question formats, however, is that in industries new to safety reporting, classification terms may not be well developed or understood by the reporting population. Options for SRS modes include paper, telephone, or electronic or Web-based form. Although Web-based forms may increase the ease with which data are collected, workers may be fearful of entering incident reports using a Web-based form because reports can be traced back to them. If workers perceive that the culture is punitive, mail reports—especially to an outside entity that manages the system—can be the most effective mode choice to alleviate these concerns. However, accessibility of reporting forms can also affect the likelihood of reporting. For example, if paper forms are outside the immediate work area and require effort beyond the normal routine to complete, then reporting may be curtailed. Since many workers have ready access to the Web, a combination of Web and mail reporting may address both access and sensitivity concerns. 
Lessons from the literature on format and mode choice include the need to base decisions about report formats on (1) the type of data needed for analysis, (2) capabilities of the reporting population, and (3) maturity of existing safety event classification schemes within the industry and to base decisions about report mode on (1) the accessibility of the mode to the reporting population and (2) workers’ concerns about and willingness to report. Reporting management includes decisions about SRS administration—who will collect, analyze, and disseminate reports—as well as decisions about who is allowed to submit reports. The choice of the entity responsible for collecting, maintaining, analyzing, and disseminating reports may affect the willingness of workers to submit them. For example, if workers perceive a punitive organizational culture or a lack of confidentiality, they may be unwilling to submit reports to an SRS within the workplace. An SRS managed by an independent, external entity might alleviate these concerns. However, an organization may have better awareness than an outside entity of internal safety issues, expertise in analyzing and addressing them, and mechanisms for encouraging participation in safety reporting. Consequently, decision makers must weigh a variety of culture-related and resource considerations in deciding how to administer an SRS. The openness of reporting—whether reporting is available to all workers or only to those in select occupations or positions—will also affect the type and volume of data collected. For example, many individuals—including pilots, ground crew, and controllers—can submit reports to FAA’s NASA-run ASRS, whereas only airlines can submit reports to the Voluntary Disclosure Reporting Program (VDRP). An open SRS, which accepts reports from different staff levels or occupations, offers the potential for analysis of events from several perspectives. 
However, such an SRS may be subject to staff hierarchies that can limit reporting among certain employee groups or professions. For example, in the medical industry, even when reporting is open to both doctors and nurses, several studies have shown that nurses have a greater awareness of and are more likely to submit reports to an SRS than doctors. Similarly, reporting may be attenuated if events must be reported up a chain of command, rather than directly by those involved in an event. Direct reporting—regardless of position or occupation—can increase the likelihood of reporting on a particular event. Lessons from the literature on system administration and the reporting population include the need to base the decision for internal or external system administration on (1) workers’ degree of concern over punishment and confidentiality and (2) availability of internal expertise and resources to analyze and encourage reporting and base decisions about who will be allowed to report on (1) awareness of reporting hierarchies and (2) the type of information desired for analysis. Analytical processes that focus on identifying safety improvements—using report prioritization, data-mining techniques, and safety and industry experts—can enhance the usefulness of reported information. Frequently, the first step in analyzing reported data is determining whether immediate action should be taken to address a safety concern. Subsequently, analyses that explore why a particular event may have occurred—such as root cause analysis—may be used to understand the contributing factors to safety events and to design solutions to the problem. Data-mining techniques, including those that combine safety reports with other databases, can also be used to look for patterns of events across organizations or a broad range of reports. Data mining requires the capability to search for clusters of similar events and reports that share common characteristics. 
Technical expertise, as well as specialized software, access to other data sources, and data format requirements, affects data-mining capabilities. For example, data-mining searches may be more complicated when error reports include both structured and open text (narrative) formats because open text must be made suitable for data mining. In addition to these retrospective analytical techniques, probabilistic risk assessment methods may also be used as a proactive approach to examine all factors that might contribute to an event. Literature on SRS use in industries, such as nuclear power and aviation, advocates using a combination of these approaches to provide a more thorough analysis of reported data. Finally, using data analysis techniques to prioritize incident reports can facilitate analysis by identifying which reports require further examination or demand immediate review because they represent serious safety concerns. Because analysts must have the technical skills and relevant knowledge to make sense of the data, decisions about the analysis will be linked with system administration and whether technical and industry expertise reside within the organization. Thorough analysis may require multidisciplinary committees that contribute a variety of expert perspectives, but the breadth of expertise required may not be readily available within an organization. For example, analysis of medication error reports may be conducted through multidisciplinary committees that include physicians, nurses, pharmacists, quality managers, and administrators. In the airline industry, an event review team (ERT), consisting of representatives from the air carrier, the employee labor association, and the FAA, is used to analyze reports as part of the Aviation Safety Action Program (ASAP).
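The triage and pattern-search steps described above can be illustrated with a minimal sketch. The urgent-term list, report fields, and recurrence threshold below are hypothetical illustrations, not drawn from any actual SRS.

```python
from collections import Counter

# Hypothetical urgent terms; a real SRS would use a vetted taxonomy.
URGENT_TERMS = {"injury", "exposure", "fire", "spill"}

def triage(report):
    """Route a report for immediate review if its narrative mentions an
    urgent term; otherwise queue it for routine analysis."""
    words = set(report["narrative"].lower().split())
    return "immediate" if words & URGENT_TERMS else "routine"

def recurring_patterns(reports, min_count=2):
    """Crude pattern search: count reports sharing the same
    (location, event_type) pair and return clusters that recur."""
    clusters = Counter((r["location"], r["event_type"]) for r in reports)
    return {pair: n for pair, n in clusters.items() if n >= min_count}

reports = [
    {"location": "lab A", "event_type": "procedure",
     "narrative": "minor spill during sample transfer"},
    {"location": "lab A", "event_type": "procedure",
     "narrative": "verification step skipped, no harm done"},
    {"location": "lab B", "event_type": "equipment",
     "narrative": "pressure gauge misread"},
]
print([triage(r) for r in reports])   # ['immediate', 'routine', 'routine']
print(recurring_patterns(reports))    # {('lab A', 'procedure'): 2}
```

Real data-mining searches over narrative text would require far more sophisticated text processing, as the passage notes, but the two-stage shape—prioritize first, then look for clusters of similar events—is the same.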
Lessons from the literature on analytical process include the need to use a report prioritization process to quickly and efficiently address key safety issues as they arise, and align analysis decisions with (1) report formats, (2) system administration and location of technical expertise, and (3) availability of other relevant data needed for analysis. SRSs—whether mandatory or voluntary—depend on the willingness of workers to report mistakes they or others have made. It is unlikely that workers would take the risk of reporting without protections that provide confidence that their reports will be kept private and incentives to report their errors. There are a variety of ways to design SRSs to protect the identity of the reporter and to encourage reporting, including (1) accepting anonymous reports, (2) providing effective confidentiality protections on reported data, and (3) deidentifying data sets. The principal reporting incentive is limited immunity—whereby workers are granted protection from certain administrative penalties when they report errors. There are advantages and disadvantages to anonymous and confidential reporting, and decisions about which to use should be guided by program goals and culture-related considerations. Anonymity—reporting without identifying information—protects reporters against legal discovery should the data be requested in a subpoena. Because an individual’s name is not tied to an incident report, anonymity may lower the psychological barriers to reporting, including fears of admitting a mistake, looking incompetent, disclosure, and litigation. Anonymity may be critical in motivating reporting among workers in an organizational culture seen as punitive, especially when legal protections for reporter confidentiality may not be feasible or well established. Report mode is also linked with reporter protection choices.
For example, one SRS for medication errors was developed as a paper-based system because administrators felt any electronic system could not be truly anonymous. Despite the protection anonymity offers reporters, there are distinct disadvantages, including the inability to obtain clarification or further information from reporters. This limitation may compromise the integrity of system data because investigators have no means for validating and verifying the reported information. In addition, anonymous data sets tend to be less detailed than identified data sets. Initial reports from identified data sets can be supplemented by follow-up interviews with reporters. The need to follow up with reporters may also make anonymous reporting unfeasible, even in organizations where significant reporting concerns exist. Anonymous reporting also tends to limit the number of data elements that can be derived from reports, making these data sets less useful than others, particularly when trying to identify patterns of error. For example, if fields that could identify reporters—such as occupation, location, and position—are not collected, statistics on safety events across organizational subunits or occupations cannot be compiled. Another disadvantage of anonymity is that reporters cannot be contacted to provide direct feedback—a useful technique for obtaining worker buy-in to the system. If reporters are given specific feedback on actions taken to address issues brought up in their reports and the outcomes of these actions, then reporters are more likely to (1) attribute value to the SRS and (2) continue submitting reports. Some SRSs have addressed this problem by offering a compromise. Reporters can receive a unique identification number that allows them to track the progress of their reports through the SRS.
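The identification-number compromise described above can be sketched briefly. The status values and in-memory storage here are illustrative assumptions, not features of any particular SRS; the point is that the token is random and never derived from the reporter’s identity.

```python
import secrets

# Reports are keyed by a random token handed to the reporter at
# submission; no reporter identity is recorded, so the token lets the
# reporter check progress without linking the report back to a person.
_reports = {}

def submit(narrative):
    token = secrets.token_hex(8)  # random, not derived from identity
    _reports[token] = {"narrative": narrative, "status": "received"}
    return token  # reporter retains this to track the report later

def check_status(token):
    report = _reports.get(token)
    return report["status"] if report else "unknown token"

tok = submit("nearly identical labels on two reagent bottles")
print(check_status(tok))        # prints "received"
print(check_status("bogus"))    # prints "unknown token"
```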
However, if reporters are mistrustful enough that anonymous reporting is necessary, they may not feel comfortable using an optional identification number provided by the SRS. Even anonymity may not be enough to alleviate reporters’ fear of retribution. Other disadvantages of anonymous reporting include the potential for (1) workers to falsely report on the behavior of others in the absence of report validation and (2) managers to discredit information about concerns or incidents as reports of “troublemakers.” Yet another disadvantage is the inability to maintain anonymity in small reporting populations or where the circumstances surrounding an incident are so specific (to an organization, individual, date, and time) that any mention of them would disclose the parties involved. Confidential reports allow investigators to follow up with reporters to gain a better understanding of reported incidents because the link between the reporter and report is maintained. However, fear of providing identifying information may limit reporting. Confidentiality is accomplished through legislative, regulatory, or organizational provisions to protect reporter privacy. Such provisions can include exemptions from subpoena or disclosure, protections against civil or criminal lawsuits for reporting, or criminal penalties for confidentiality breaches. For example, some state-based mandatory SRSs for medical errors include statutory provisions that protect reporters from some potential legal liability. One international aviation SRS has legislation making confidentiality breaches a punishable offense. Maintaining identifying information enables data analysis across professions and organizations, which can aid in benchmarking. Such information can reveal whether recurring incidents indicate problems within a specific organization or profession as opposed to those that are industrywide, thereby allowing interventions to be targeted to areas in greatest need.
Reporting formats may be less burdensome for confidential systems than for anonymous systems, which must gather all details up front. Confidential reporting allows investigators to gather significant information through follow-up interviews, so less detail needs to be provided on the reporting form. In the literature, report follow-up was associated with a variety of positive results. For example, it can (1) add to reporters’ long-term recall of the event, enhancing the quantity and richness of information collected; (2) support event validation and clarification; and (3) bring closure to an incident and assure reporters their information is being taken seriously, thus increasing the likelihood of future reporting. A potential disadvantage of a confidential SRS is that workers may be fearful of the consequences—real or implied—of reporting. Moreover, for systems untried by the legal system, the surety of confidentiality provisions can be—in reality or perception—tenuous. For example, the Applied Strategies for Improving Patient Safety (ASIPS) is a multi-institutional reporting system designed to analyze data on medical errors and is funded by the Agency for Healthcare Research and Quality (AHRQ). This voluntary SRS for patient safety events relies on confidential reports provided by clinicians and office staff. While this reporting system promises reporters confidentiality within the system, the program can offer no protection against potential legal discovery. However, because ASIPS is funded by AHRQ, ASIPS reporters would be protected by the confidentiality provision in AHRQ’s authorizing legislation, although the protections provided by this provision have never been tested through litigation. Because of the uncertainty of confidentiality protections, administrators of ASIPS chose to build strong deidentification procedures—removal of identifying information from reported data—into the system rather than rely solely on confidentiality protections.
Another potential disadvantage of confidential SRSs is that costs may be higher than for an anonymous system if follow-up interviews with reporters are part of SRS requirements. Sufficient resources are required for investigation and follow-up with reporters; however, resource constraints may limit these actions. Additional resource commitments (in the form of follow-up interviews) are also assumed by those who submit confidential reports. Data deidentification supports confidentiality provisions since the deidentification process makes it difficult to link reports to specific individuals or organizations. Deidentification can also support feedback mechanisms because the data can be readily shared within and across organizations and industries. Data can be deidentified at the source or in summary reports and data systems. Source deidentification involves removal and destruction of all identifying information from reports after follow-up and investigation have been completed. Secondary data deidentification involves removal of identifying information in summary reports or databases for sharing safety information and alerts. Deidentification of source reports strengthens confidentiality protection because records are unavailable even if they are subpoenaed. Source report deidentification may require (1) technical solutions if reports are collected electronically and (2) special processes if collected in another format. Eliminating the link between the reporter and the report can help reinforce the confidential nature of an SRS and provide an incentive for reporting, as long as the process for deidentification is understood by the reporting population. Deidentified data can be readily shared within or across organizations and industries, enhancing analytical possibilities by increasing the number of reported incidents available for analysis.
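Source deidentification as described above might look like the following sketch. The field names and the single date pattern are simplistic, hypothetical examples; real deidentification requires far more thorough scrubbing (names, locations, free-text identifiers) and human review.

```python
import re

# Hypothetical identifying fields and one date pattern, for illustration
# only; a production system would scrub far more than this.
IDENTIFYING_FIELDS = {"reporter_name", "employee_id", "facility"}
DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

def deidentify(report):
    """Drop identifying fields and mask dates in the narrative once
    follow-up and investigation are complete."""
    cleaned = {k: v for k, v in report.items() if k not in IDENTIFYING_FIELDS}
    cleaned["narrative"] = DATE_RE.sub("[DATE]", cleaned["narrative"])
    return cleaned

report = {
    "reporter_name": "J. Smith",
    "employee_id": "E-1234",
    "facility": "Site 7",
    "event_type": "procedure",
    "narrative": "On 3/14/2011 a sample was mislabeled.",
}
print(deidentify(report))
# {'event_type': 'procedure', 'narrative': 'On [DATE] a sample was mislabeled.'}
```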
Limited immunity provisions can increase the volume of reports, particularly when there are emotional barriers, such as fear about reporting one’s mistakes. These provisions offer protection from certain legal or regulatory action when specified conditions are satisfied. For example, the ASRS offers limited immunity from enforcement actions provided certain requirements are met and the incidents do not involve criminal or negligent behavior. The literature suggests that the immunity provisions offer a strong incentive to report and that pilots would not submit ASRS reports if these provisions did not exist. Numerous international SRSs also contain immunity provisions, including the Danish aviation SRS and patient care SRSs in both Australia and Israel. Lessons from the literature on choosing reporter protections and incentives include the need to base the choice between anonymity and confidentiality on (1) organizational culture, especially workers’ degree of concern about punishment and confidentiality, and (2) the amount of detail required for analysis and whether it can be collected without follow-up; consider hybrid systems in which confidential and anonymous reporting are used simultaneously if there is a conflict between organizational culture and data need; develop data deidentification measures to support confidentiality and data-sharing efforts; and consider limited immunity provisions to increase the reporting incentive. Because a primary SRS function is safety improvement, the system must include feedback mechanisms for (1) providing actionable safety information to the relevant populations and (2) improving the SRS through identification of reporting gaps across occupations or locations and evaluation of the effectiveness of the system as a safety tool. To support its primary function of safety improvement, an SRS must include feedback mechanisms for providing actionable safety information to the relevant populations.
A variety of populations can benefit from SRS feedback, including (1) reporters, (2) managers, (3) organizations and the industry at large, and (4) system administrators. Feedback to reporters is essential in order to promote safety and reinforce the benefits of reporting. If workers who report safety events do not see any evidence that their report has been used, they may question the value of the system and discontinue reporting. Feedback among managers promotes management awareness of safety concerns, management buy-in, and top-level efforts to address those concerns. Feedback across the organization or industry can provide tangible evidence of the value of the SRS by alerting management and workers to important safety issues. Industry feedback can also provide a benchmark to compare safety across similar organizations when data are (1) collected at the local level and (2) compiled in a centralized regional or national database. Use of such benchmarks may help decision makers identify gaps in performance and practices that may improve safety conditions in their own organization. Feedback mechanisms for system evaluation are also important in ensuring the SRS’s continued effectiveness. Feedback on reporting gaps across occupations or locations can help identify nonreporting populations. When these reporting gaps are compared with other data— such as reports from comparable sites—they can help identify areas in need of targeted outreach and training. In addition, feedback from safety culture and system-user surveys, which assess safety and reporting attitudes, can be used to evaluate the effectiveness of an SRS. Performance metrics on safety improvement can be incorporated into these surveys, providing information on the degree to which program goals are being met and identifying areas of needed system improvement. 
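Identifying reporting gaps across occupations or locations, as discussed above, can be as simple as comparing report counts against the groups expected to report. The occupations and threshold below are hypothetical illustrations.

```python
from collections import Counter

def reporting_gaps(reports, expected_groups, min_reports=1):
    """Flag expected groups whose report volume falls below a threshold,
    suggesting targets for outreach and training."""
    counts = Counter(r["occupation"] for r in reports)
    return [g for g in expected_groups if counts[g] < min_reports]

reports = [{"occupation": "technician"}, {"occupation": "technician"},
           {"occupation": "researcher"}]
print(reporting_gaps(reports, ["technician", "researcher", "manager"]))
# ['manager']
```

In practice, the same comparison would be run against reports from comparable sites, as the passage notes, rather than a fixed threshold.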
Lessons from the literature on choosing feedback mechanisms include the need to provide direct feedback to reporters to foster worker-specific buy-in; provide regular, timely, and routine feedback—for example, in the form of newsletters, alerts, Web sites, and searchable databases—to support overall organizational buy-in for reporting; provide positive feedback to managers who receive a high volume of reports to demonstrate the importance of reporting and counteract the perception that error reporting reflects poorly on management; use the data to identify reporting gaps for targeted outreach; and evaluate the effectiveness of the SRS to support ongoing modification and improvement. Lessons from case studies of safety reporting systems (SRS) in three industries—aviation, commercial nuclear power, and health care—indicate the importance of cultural assessment and resource dedication in SRS design and implementation, and suggest certain features in the three key areas. Although the industries differ in type of work, regulation, and ownership, all three face substantial inherent risks to health and public safety and have made significant investments in promoting safety through voluntary SRS programs. Consequently, their experiences suggest lessons that can be applied to the design and implementation of an SRS for biological labs. Collectively, these SRSs reflect 70 years of safety reporting experience. In particular, the FAA’s NASA-run Aviation Safety Reporting System (ASRS) in aviation, the Institute of Nuclear Power Operations’ (INPO®) Significant Event Evaluation—Information Network (SEE-IN®) system in commercial nuclear power, and VA’s internally managed Patient Safety Information System (PSIS) and NASA-run Patient Safety Reporting System (PSRS) in VA health care provide the basis for the following four lessons for SRS design and implementation: 1. Assessment, dedicated resources, and management focus are needed to understand and improve safety culture. 2.
Broad reporting thresholds, experience-driven classification schemes, and processing at the local level can be useful SRS features in industries new to safety reporting. 3. Strong legal protections and incentives encourage reporting and help prevent confidentiality breaches. 4. A central industry-level entity facilitates lesson sharing and evaluation. The case studies demonstrate that establishing a robust safety culture is neither quick nor easy; it requires a multipronged effort—involving assessment, dedicated resources, and management focus—to recognize safety challenges and improve safety culture. Despite the costs and challenges of implementing an SRS, the industries recognized they could not continue to operate without safety improvements and that their SRSs were a key tool in these efforts. Each of the three industries created its SRS after recognizing that existing operations and safety culture posed an unacceptable risk to workers and the public. In both the aviation and the commercial nuclear power industries, SRS initiation was prompted by serious accidents rather than a proactive assessment of the safety culture. The Veterans Health Administration proactively initiated an SRS program after its administrators and patient safety advocates recognized the need to redesign systems “to make error difficult to commit.” Such assessments can reveal systemic safety culture problems before they become critical. The concept of a voluntary aviation reporting system was suggested in 1975 by the National Transportation Safety Board (NTSB), the FAA, and the aviation industry following an investigation of a fatal airline accident near Berryville, Virginia. The NTSB found that the accident might have been averted if previous crews’ reports about their near-miss problems in that area had been shared. These problems included inadequate aviation maps and the cockpit crews’ misunderstanding related to the air traffic controllers’ terminology.
The NTSB reported that the industry culture made it difficult to report these problems. These cultural barriers were apparently known, although a safety culture assessment might have prompted proactive efforts to correct them. As one solution to these problems, the NTSB suggested an aviation SRS, initially managed by the FAA and known as the Aviation Safety Reporting Program. But within a few months, the FAA had received few reports. It therefore transferred operation and management of the program to NASA and renamed it the Aviation Safety Reporting System (ASRS). In 1979, the partial meltdown of a reactor at Three Mile Island (TMI) in Pennsylvania led to the creation of INPO, an industry-initiated technical organization that collects, studies, and shares safety lessons throughout the industry using the SEE-IN program. The INPO program was developed and is managed independently of the Nuclear Regulatory Commission (NRC) regulatory requirements. Although the NRC regulates the safety of commercial nuclear power generation, at the time of TMI, nuclear utilities had been operating with a high degree of autonomy and were fairly insular, according to a 1994 study. The 1994 study of the safety culture at nuclear reactors found that the management style reflected the culture of conventional energy plants—a “hands-off management” and “fossil fuel mentality” that emphasized maximum energy production as the highest value. An industry official explained that the TMI accident was a shock for the industry, which became determined to operate its nuclear reactor facilities safely and reliably, thereby convincing the American public it could be responsible and safe. The entire U.S. commercial nuclear power industry joined INPO within months of the TMI incident, and all U.S. nuclear utilities remain members today. The industry focused early efforts on plant evaluations to understand the culture that had led to the TMI accident.
Within a year, INPO produced the first of its Significant Operating Event Reports, which provide information on identified safety problems and make recommendations for improvement. Despite safety advances in the decades after INPO was established, the industry was once again reminded of the importance of safety culture assessment in 2002, when corrosion ate a pineapple-sized hole in the reactor vessel head at the Davis-Besse plant in Ohio. Prior to this incident, INPO had given individual plants the responsibility for assessing their safety culture—assuming that they had a good understanding of it. Investigation revealed that a weak safety culture contributed to the incident. After the Davis-Besse incident, INPO re-emphasized the importance of proactively assessing safety culture before critical safety failures occur and recommended that safety culture assessments be a permanent, periodic requirement. After VA hospital accidents that had resulted in harm to patients, the VA established the National Center for Patient Safety (NCPS) in 1999. That unit designed and launched two options for reporting—one internal (the PSIS) and one contracted (the PSRS) to the same NASA center that operates ASRS for the FAA. The VA launched its SRS program guided by a vision emerging in the medical community to “create a culture in which the existence of risk is acknowledged and injury prevention is recognized as everyone’s responsibility.” The VA hired management with experience in NASA’s safety programs, who surveyed safety culture as they initiated the SRS. In addition, the NCPS has conducted three nationwide safety culture surveys, beginning in 2000, to understand the attitudes and motivations of its frontline workers. The most recent, in 2009, allowed the NCPS to identify a subcategory of caregivers for intervention. Safety culture improvement depends on a robust reporting culture, which requires considerable investment of time and resources.
As the experiences of the three industries demonstrate and as shown by SRS data from two of the case industries, these investments pay off in an increase, over time, in the volume of safety reports. Figure 3 illustrates time frames and growth in SRS reporting for FAA’s ASRS and the VA’s PSIS. Through conventional classroom and seminar training, workers in some industries learned the terms, goals, and instruments of the new voluntary SRS. Several innovative training opportunities were also marshaled, including on-the-job training and employee loan and training programs focused on improving teamwork. Both types of training supported safety culture change and developed trust in the SRS. Staff time and investment at all levels were necessary to accomplish these training goals. From the inception of ASRS, the volume of aviation safety reports grew slowly, indicating an increasing understanding among reporters of the multiple factors that contribute to safety. However, a 1994 National Academy of Public Administration (NAPA) evaluation, requested by the FAA, found that FAA funding provided to NASA for the operation and management of the ASRS had not kept pace with the work. According to a NASA ASRS official, because resources were insufficient to perform a detailed analysis on all the reports, reports are triaged. Only those deemed most hazardous receive deeper analysis. The NAPA report also noted that the aviation community broadly affirms the safety value of ASRS and uses the data for training and safety awareness. By contrast, some FAA line employees said ASRS was of limited use. As a result of the NAPA report and congressional actions, the FAA modestly increased funding. After the NAPA recommendation to modernize, the ASRS transitioned from paper to electronic report submissions. A recent FAA-sponsored study recognizes the importance of training and retraining all SRS stakeholders, offering best practices for formal and informal training. Reporting has increased. 
ASRS currently receives about 50,000 reports per year, which demonstrates a sustained level of trust in reporting. However, the study of best practices in FAA’s voluntary reporting options recommended that SRS managers assess the availability of resources and plan for acquiring them, as resource needs are likely to increase over time. In further recognition of the importance of resources to ASRS, the latest Memorandum of Understanding between the FAA and NASA also includes a yearly inflation factor for the ASRS budget. Safety reporting to INPO’s SEE-IN program began in 1980. The volume of reports forwarded to INPO from the plants is between 3,000 and 4,000 annually. Early safety reports tended to focus on technical failures, and INPO realized that reporting on human error needed to increase, according to an INPO liaison. Moving beyond reporting equipment failure required significant training. To encourage reporting of both equipment and human factor issues, INPO established and continues to accredit training courses. Recognizing the importance of having staff with industry knowledge, INPO began a second wave of hiring people with nuclear industry experience to ensure the safety science message was communicated in a way that both safety specialists and plant operators could understand. Despite increases in reporting, however, the Davis-Besse incident in 2002 highlighted the serious consequences of lapses in safety culture. Among other actions, INPO issued its safety principles document in 2004, which provides a framework for assessing safety culture. The document outlines aspects of positive safety culture, such as workers’ questioning attitudes that support reporting and managers’ demonstrated commitment to safety through coaching, mentoring, and personal involvement in high-quality training.
Reporting to the VA’s PSIS grew strongly, from 300 incidents reported annually at local hospitals in 2000 to 75,000 in 2005. Yet, the initiation of a voluntary safety reporting system in the VA health care facilities has faced considerable cultural and institutional challenges. For example, one study found the various professions within hospitals disagreed—when presented with scenarios such as late administration of medication—as to whether an error had occurred. In congressional testimony in 2000, we had observed that if the VA hospital system were to implement an SRS, the VA would face a challenge in creating an atmosphere that supports reporting because hospital staff have traditionally been held responsible for adverse patient outcomes. In our 2004 report, we also found that power relationships, such as nurses’ reluctance to challenge doctors, can be obstacles to patient safety. However, after the first 3 years of the VA health care system’s SRS, the cultural change that supports safety reporting was under way at three of four facilities studied, as a result of experiential training in addition to conventional classroom training. The growth in reported events to the VA SRS over the last 10 years and our 2004 study suggest that the actions that the VA took can be successful in supporting a safety culture and reporting. Experiential—that is, on-the-job—training, in addition to conventional classroom experience, fostered the habit of reporting safety events at many VA hospitals. Since the initial years of the VA’s hospital SRS, clinicians and other VA workers have been selected to participate in the hospital-based analysis of SRS reports so that they could learn how the reports would be used. Once patient safety managers prioritized reports, interdisciplinary teams of hospital staff, including local frontline clinicians, looked for underlying causes and devised systemic fixes.
Through this experience, clinicians and other hospital staff saw first-hand the rule-driven and dispassionate search for root causes that resulted in a systemic fix or policy change rather than punishment. We found that (1) this training fostered a cultural shift toward reporting systemic problems by reducing fear of blame, and (2) staff were impressed with the team analysis experience because it demonstrated the switch from blame and the value of reporting close calls. In addition, the VA brought together facility-level workers, including patient safety managers from VA medical centers across the nation, to introduce them to the SRS. Through these seminars, staff were introduced to SRS terms, tools, goals, and potential obstacles. They heard success stories from industry and government, findings from the early VA safety culture surveys, and recent alerts and advisories. To overcome cultural barriers to safety reporting—such as fear of punishment, lack of trust between coworkers and management, and hierarchical prohibitions on communication—management demonstrations of support for the SRS are important. In the three industries, this support was demonstrated through the deliberate use of tactics shown to be effective at changing safety culture and supporting safety reporting such as (1) open communication across the workplace hierarchy encouraged in small group discussions and meetings with managers; (2) storytelling, a tool to direct changes in norms and values; and (3) rewards for participation in safety reporting or open communication in meetings. The three decades of ASRS experience demonstrate the importance of consistent focus versus episodic efforts to publicize and support the SRS. In the early stages of ASRS implementation, the FAA and ASRS staff relied on small group briefings and promotional documents to foster awareness and trust in reporting. 
For example, the FAA, through its Advisory Circular, notified the aviation community that the system was operational and, along with NASA, issued press releases and conducted briefings about the system. In addition, industry groups and airlines publicly expressed support for the system, and, according to a 1986 NASA report, an advisory group carried “the word about ASRS program plans and accomplishments back to their respective constituencies.” Other early promotional efforts included the distribution of descriptive brochures and posters to operators, FAA field offices, air traffic control facilities, and airline crew facilities. As a result of these efforts, according to NASA’s 1986 report, the number of reports coming into the system in the early years exceeded expectations. However, a NAPA study 8 years later raised concerns about the lack of publicity. That study found that pilots lacked knowledge of the ASRS and the immunity features and questioned the FAA’s credibility. NASA responded with a second promotional surge by (1) publishing its first CALLBACK, a monthly online bulletin, and (2) touring FAA regional headquarters to promote the SRS. However, the NAPA study concluded that the lack of internal FAA support for the ASRS had limited the degree to which FAA uses ASRS data and led some within the agency to question the legitimacy of ASRS products and activities. That study also found that FAA line officers (with the exception of the Office of Aviation Safety) thought the ASRS had limited utility, and some even suspected bias in reporting as a result of reporters’ interest in earning immunity from FAA enforcement actions. To address these concerns, the FAA has recently been advised to elevate the importance of establishing an initial shared vision among all stakeholders through open discussion, training, and sustained promotion efforts. INPO focused on leaders and employee loan programs to change the industry’s safety culture one employee and one plant at a time.
Leadership’s demonstrated commitment to safety is a key INPO principle for a robust safety culture. This key principle stems from the philosophy of having “eyes on the problem.” That is, plant managers must be out in the work areas, seeing things and talking to employees in order to reinforce a safety culture. This principle also includes reinforcing standards and encouraging candid dialogue when safety issues arise. Such reinforcement can be in the form of rewards for reporting, such as being congratulated at plant meetings for a “good catch.” Managers also have incentives to encourage workers to report. Following its biannual inspections, INPO summarizes its assessment of the plant’s safety conditions, providing a numeric score, partly based on the robustness of the plant’s SRS. These safety scores are important to plant managers because they can affect regulatory oversight and insurance premiums. Scores range from 1 to 5, with 1 as the top safety rating. While these assessments may result in more attention and assistance for safety improvements, they also instill pride in the plant, and at annual managers’ meetings, managers of plants with the highest scores receive recognition. INPO has also facilitated active peer review and employee loan programs to break down the insularity of the TMI era. When individuals with in-depth industry experience participate in the inspection process and work at INPO headquarters, they see firsthand the excellence other plants practice and how those practices relate to INPO safety initiatives. The VA hospitals used small group meetings, storytelling, and small rewards to reinforce safety reporting. At the most successful VA hospital we reviewed in 2004, administrators held more than 100 small group meetings where storytelling was used in order to introduce the new SRS. VA hospital administrators used examples from aviation wherein two airline pilots failed to communicate well enough to avoid a fatal crash. 
The crash might have been avoided had the first officer challenged the captain. This story raised parallels with the medical hierarchy and led to discussions about similar unequal power relationships in the hospital. Administrators introduced more effective ways to challenge authority, naming it “cross-checking.” An early report to the VA SRS, which involved nearly identical packaging for an analgesic and a potentially dangerous drug, was made into a poster as part of the campaign for the SRS. The more successful VA hospitals rewarded the month’s best safety report with a plate of cookies or certificates to the cafeteria. This playful openness reduced secrecy and fears of punishment and increased comfort with reporting, according to our 2004 analysis. After the three industries instituted a voluntary SRS, workers faced a steep learning curve in recognizing a reportable event and developing trust in reporting. The industries encouraged early reporting in a variety of ways. Overall, their experiences demonstrate that reporting is enhanced when (1) reportable events are broadly defined and allow reporting from a wide range of workers; (2) workers are able to describe the details of an incident or concern in their own words, with classification schemes applied by specialists at a higher level; and (3) both internal and external reporting options are available, along with some degree of report processing at the local level. In the three case industries, an early challenge was workers’ lack of understanding of what should be reported. In each of the industries, the creation of an SRS involved broadening workers’ concepts of safety events, in addition to accidents, that were worthy of reporting. 
Nevertheless, early reporting still tended toward accidents and technical issues—accidents because they were fairly evident and harder to hide and technical issues (as opposed to human factors) because the external nature of the fault provided some distance from individual blame. Reporting these technical events helped workers become more comfortable with reporting and provided objective links between their reports and systemic safety improvements, according to several industry officials. Over time, workers’ ability to identify less concrete, but equally unsafe, nontechnical issues grew. The industries managed this growth, in part, by keeping the threshold and definitions for reportable events simple. In some cases, direct reporting—as opposed to reporting hierarchically, up the chain of command—was used to eliminate the fear that workers might have about reporting a mistake to the boss. Open reporting of events from several workers—especially those in different occupations—provided more raw data in the search for underlying causes, as well as information about the event from a variety of perspectives. The ASRS used a broad definition of reportable events and allowed all frontline aviation personnel to report them. Reportable events include any actual or potential hazard to safe aviation operations, thus extending coverage beyond “accidents” on the risk pyramid. Serious accidents are not reported to the ASRS, since they are already covered by the NTSB. While reporting is available to all participants in the national aviation system, for several decades, the majority of reports were from pilots. After outreach and initiatives—such as revised specialized forms—the ASRS has in recent years seen modest increases in reports from diverse groups of workers, such as maintenance workers, enhancing the potential for analysis of single incidents from a variety of perspectives. 
To reduce the loss of information that could occur if reports from frontline workers are filtered through work hierarchies, the ASRS makes it possible for individual aviation workers to report directly to the central collection unit within NASA. Individual nuclear plants operate corrective action reporting programs, which feed into INPO’s SEE-IN system. The plant-level corrective action programs have a zero threshold for reporting—that is, workers can report anything of concern. To make the definition for reporting clear to workers, INPO characterizes the reporting threshold in terms of asking workers to report events that they would want to know about if the event had happened elsewhere. In addition to establishing low reporting thresholds, the plants encourage a broad spectrum of workers to report to their corrective action programs. Open reporting and low reporting thresholds are necessary to ensure the fullest coverage of significant event reporting, according to an INPO liaison. While the individual plants are expected to assess and address the bulk of reports, they must also identify the most significant reports to send to INPO. Plants forward between 3,000 and 4,000 concerns to INPO each year from the estimated 400,000 concerns reported and resolved at the plant level through their corrective action programs. To ensure all staff are encouraged to report any event of interest, INPO examines the robustness of the plant’s reporting culture during biannual plant inspections. As part of this process, INPO also compares corrective action reports to SEE-IN data to determine whether reports that should have been forwarded to INPO were not. If such discrepancies arise, INPO discusses these cases with plant managers to clarify the plant’s reporting thresholds. 
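INPO’s comparison of plant corrective action data with SEE-IN submissions is, at its core, a set-difference check: find the reports significant enough to warrant forwarding that never arrived. The sketch below is purely illustrative; the field names, significance scale, and threshold are assumptions for demonstration, not INPO’s actual data model.

```python
# Illustrative sketch (assumed fields, not INPO's actual system): flag
# significant corrective action reports that were never forwarded to SEE-IN.

def find_missed_forwards(corrective_actions, see_in_ids, significance_threshold):
    """corrective_actions: list of dicts with 'id' and 'significance' (1 = highest).
    see_in_ids: set of corrective action ids already forwarded to INPO."""
    return [r for r in corrective_actions
            if r["significance"] <= significance_threshold
            and r["id"] not in see_in_ids]

plant_reports = [
    {"id": "CA-101", "significance": 1},  # significant, forwarded
    {"id": "CA-102", "significance": 4},  # routine, handled locally
    {"id": "CA-103", "significance": 2},  # significant, NOT forwarded
]
forwarded = {"CA-101"}
missed = find_missed_forwards(plant_reports, forwarded, significance_threshold=2)
```

In this hypothetical data, CA-103 would surface for discussion with plant managers: it meets the assumed significance threshold but was never sent on.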
Prior to the SRS program, VA hospital workers were accustomed to reporting only the most serious events, such as inpatient suicides or wrong-site surgery. The VA SRS program expanded the definition of reportable events to include incidents—such as close calls or errors that caused no patient harm—in recognition of the value of incident data in detecting systemic safety problems. Despite the conceptual shift in reporting expectations, in our 2004 report, we found that 75 percent of clinicians we surveyed at four facilities understood these new reporting requirements. In addition, the SRS program was designed to allow direct reporting from any member of the medical center staff to the patient safety manager. This expansion—beyond the previous expectation that nurses would report to their supervisors—was made in recognition of the power relationships among clinicians that might inhibit reporting. As a patient safety manager noted, the change in reporting expectations was evidenced when a chief surgeon came to report instances of mistaken patient identity in surgery. In all three industries, delaying the launch of an SRS for development of a formal error classification scheme would have been unpalatable in light of significant pressure to implement solutions following serious events. Further, some safety experts believe rigid early classification of error can limit new knowledge and insights. In the absence of such schemes, the industries allowed reporters to give detailed narrative accounts of the incidents or concerns in their own words. As the industries’ comfort with error terminology develops, some SRSs may encourage reporters to classify certain aspects of events in order to facilitate industrywide analyses. ASRS reports are primarily experiential narratives in the words of the reporters. 
Although the heavily regulated aviation industry had event definitions for rule enforcement, studies have concluded that the ASRS was begun without a formal classification of errors. The unstructured nature of the narrative reports is an analytic challenge. However, the ASRS has developed a set of 1,200 separate codes that facilitate the analysis of aviation risk. Recent FAA activities are focused on the benefits of an integrated data system for safety events that combines ASRS’s narrative reports and other reporting systems. Understandably, international aviation safety organizations have declared common reporting methods—including terms and forms—best practices. The corrective action reporting programs at each plant collect information as narratives in the workers’ own words. Corrective action reports are reviewed at the plant level by a team of managers and specialists. As part of this review, the team determines what actions, if any, should be taken to address the issue, and reports are sorted and some level of classification is applied. Most corrective action reports are dealt with at the plant level. Only reports that rise to a defined level of significance—as determined through the review process—are sent on to INPO. While the reports sent to INPO do maintain a narrative description of the event, they also classify specific aspects of the event. INPO further sorts and classifies these reports and produces various levels of industry alerts based on this review. According to a VA official, the SRS program was launched without an error classification system at the reporter level. Considering that even now the science for developing a formula for public reporting is evolving, he noted that the time it would have taken the VA to develop such a system would have delayed the launch by several years. Instead, the classification is done centrally. 
The VA has maintained this process because it believes that application of an error classification scheme is best done at higher levels by, for example, the patient safety managers. The VA official observed that the Agency for Healthcare Research and Quality (AHRQ) has been working on a set of error terms for nearly 5 years; however, there is, to date, no industrywide agreement on error or adverse event terminology in health care, although one for select health care institutions is under review. The initiation of SRS programs in two industries was driven by urgent circumstances, before there was time to assess workers’ willingness to report. However, while program developers did not know everything about the problem, they did know that existing knowledge about the workforce culture could provide some basis for planning—that is, if employers suspect they have a mistrustful workforce, they can plan for it. In addition, the industries recognized that the value of local-level processing for improving safety culture and assigning responsibility for safety to the front line was too great to cede entirely to an outside entity. Therefore, they developed a bilevel process for assessing safety data at both the local and industry levels. The airline industry manages the tension between trust and ownership in SRS reporting by offering a variety of internal and external, as well as local- and industry-level, reporting options. The ASRS (an external reporting option) was originally managed by the FAA, but within a year, it was moved to NASA—an honest broker—because of concerns that reporting directly to the regulator would discourage reporting. While separating the reporting function from regulation encouraged reporting, it may have fostered unconstructive perceptions of the ASRS among some FAA staff. Specifically, the 1994 NAPA evaluation found that FAA workers may not understand the ASRS and, consequently, devalue it. 
While the ASRS receives reports directly from reporters, the FAA’s Voluntary Safety Programs branch (VSP) launched a bilevel SRS program in which 73 airlines are primarily responsible for receiving and processing reports and implementing solutions. By selecting a private structure for these SRSs, the FAA gets the entity closest to the local context to analyze reports and develop and implement solutions. A selection of the systemic problem reports is transmitted to the FAA’s Aviation Safety Information Analysis and Sharing program, which the FAA uses to develop industrywide guidance and regulations to improve safety. More than 60 percent of reports to the ASRS also appear in VSP’s other SRSs. In the commercial nuclear power industry, most safety reports—an estimated 400,000 annually—are managed at the plant level, according to an INPO liaison. There is no confidentiality for individual reporters to their plant’s SRS; instead, the reporting system relies on developing an open reporting culture. Each plant is responsible for sorting, analyzing, and implementing corrections for most of the reports to its corrective action program. The reporter’s identity is not revealed when the more serious events are sent on to INPO. INPO created a bilevel reporting structure because it lacked the resources to handle 400,000 reports annually and because it sought to involve the plants by giving them some ownership of the safety improvement system. However, recognizing the need for an industry-level assessment of safety data, INPO uses the more serious event reports from plants to develop industry alerts and safety recommendations. In the absence of specific information about workers’ trust in reporting to an internal system, the VA could not be certain it had a safety culture that would support open local reporting. However, VA officials knew nurses and pharmacists were “rule followers,” while physicians had more discretion. 
The VA handled this uncertainty by initiating both internal and external reporting options. One reporting option, which emulated the ASRS model, was designed to enable workers to report directly to NASA—a contracted, external entity—confidentially. After operating both reporting options for nearly 10 years, the NASA-run system was discontinued for budgetary reasons at the end of fiscal year 2009. While the PSIS enables workers to report to an internal entity—the hospital’s patient safety manager—the external NASA option provided more confidentiality and some measure of anonymity; the internal option provides personal contact and confidentiality, but not anonymity. Even with its much lower report volume—about a 1 to 1,000 ratio of reporting for the PSRS compared to the PSIS—for over 8 years, the system contracted to NASA provided a confidential alternative for workers who felt that they could not report to their own hospital, providing a safety valve or insurance policy of sorts. In addition to dual reporting options, the VA also planned for internal and external processing options. The NCPS intended that hospital-level report collection and processing—including root cause analysis and the development of systemic changes—be deliberately assigned to the individual hospitals to give workers on-the-job learning, and we found the experience drove home to clinicians that the SRS was a nonpunitive, solution-developing system. While reports are processed by a higher-level entity, the NCPS, to facilitate identification of issues with systemwide safety implications, local-level processing is also maintained because it provides a sense of ownership and immediacy in solving problems. Each industry we examined grappled with how to balance the regulatory tradition of punishing workers (or entities) for safety events with legal protections and incentives for reporting. Under most current laws, reports generated before an accident are considered discoverable evidence afterwards. 
Such laws may deter companies from soliciting and collecting reports about safety problems and workers from reporting them. To address these concerns, the three industries offered a variety of mechanisms for protecting and encouraging reporting, including confidentiality provisions, process protections, and reporting incentives. Confidentiality provisions, rather than anonymous reporting, are the most common approach to protecting reporters’ identities because they allow follow-up with the reporters; however, their protections are not ironclad. And, as SRS program managers in some of the industries discovered, even the perception that confidentiality can be, or has been, breached can discourage reporting. In the three industries, most of the laws supporting SRS confidentiality protections are a patchwork of older laws not originally intended to back up an SRS. Most also have exceptions to confidentiality if Congress or law enforcement agencies demand access to the protected documents. Some of the systems rely on existing laws, such as exceptions in the Freedom of Information Act (FOIA); other systems have a legal and regulatory basis crafted for related purposes. As SRS failures in other countries illustrate, legal protections can be strengthened or weakened through legislative action. Recognizing the fragility of confidentiality provisions, the three industries also relied on processes and incentives to protect and encourage reporting. Processes, such as deidentification of reports, support confidentiality provisions. Some industries apply deidentification to both the reporter and the organization or unit involved. Data deidentification at the organizational level supports organizational buy-in for reporting, makes it less likely that reporters will be discouraged from reporting, and facilitates industrywide sharing by removing fear of reprisal. 
In addition, limited immunity provisions or small rewards were used, in some industries, as incentives to encourage safety reporting, especially in environments of mistrust. Limited immunity provisions apply when certain requirements—such as timely reporting—are met. These provisions provide reporters (individuals or organizations) with a means for avoiding or mitigating civil or regulatory penalties. With respect to rewards, even seemingly small incentives can be effective in promoting trust in reporting. The FAA protects its reporters through a combination of confidentiality and limited immunity, relying on regulation, policy statements, and procedural or structural arrangements. For the much older ASRS, confidentiality is maintained both as part of the interagency agreement between NASA and the FAA and through procedural efforts, such as deidentification of reports, as well as regulation. Section 91.25 of the Federal Aviation Regulations prohibits the FAA from using information obtained solely from these ASRS reports in enforcement actions against reporters unless criminal actions or accidents are involved. Specifically, after following up with the reporter and analyzing the report, the NASA office removes information that could identify the reporter, including the reporter’s name and the facility, airline, or airport involved. NASA destroys the identity portions of the original reports so that no legal demand can reveal them. The ASRS’s information processing and deidentification of reports has ensured the confidentiality of its reports for over 30 years, despite pressures from the regulator and outside entities to reveal them. To strengthen the confidentiality agreement between the FAA and NASA, the FAA has determined by regulation that it will generally not use reports submitted to NASA in enforcement actions and provides some disciplinary immunity for pilots involved in errors. 
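The deidentification step described above amounts to stripping a fixed set of identity fields before a report enters the searchable record. The following is a minimal sketch under assumed field names; NASA’s actual report structure and processing differ.

```python
# Illustrative only: hypothetical field names, not the actual ASRS record format.
IDENTIFYING_FIELDS = {"reporter_name", "facility", "airline", "airport"}

def deidentify(report):
    # Return a copy of the report with identity fields removed; in the real
    # process, the identity portion itself is destroyed after reporter callback.
    return {k: v for k, v in report.items() if k not in IDENTIFYING_FIELDS}

raw = {
    "reporter_name": "J. Doe",
    "airline": "XY",
    "facility": "Tower A",
    "narrative": "Altitude deviation during descent.",
    "event_type": "altitude deviation",
}
clean = deidentify(raw)  # retains only the narrative and event classification
```

The design point the sketch captures is that the narrative and classification survive for analysis while nothing in the retained record can tie the report back to a person or organization.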
In contrast, for several of the carrier-run SRSs initiated since 1997, reports are protected from FAA enforcement action only by policy. However, despite the combined legal and procedural bases for protecting aviation SRS data—for both the ASRS and the other SRSs the FAA supports—there are pressures to violate SRS confidentiality. After recent judicial decisions forced disclosures from an SRS managed by the VSP branch, four major airlines withdrew from a voluntary program but have since rejoined. INPO operates under considerable confidentiality and maintains the ability to withstand legal challenges. Protecting the confidentiality of plants was central to the inception of INPO’s safety efforts, according to industry officials. While guaranteeing its member utilities confidentiality similar to that in a doctor-patient relationship, INPO has also cultivated an open questioning attitude as the wellspring of safety reporting. While individual reporters receive no confidentiality, the reporting system relies on developing an open reporting culture. Under an INPO-NRC Memorandum of Agreement, reports and information that INPO makes available to the NRC will be treated as proprietary commercial information and will not be publicly disclosed. INPO maintains legal resources for future confidentiality challenges. In INPO’s bilevel system, reports sent to INPO do not identify the reporter, and INPO’s confidentiality includes carefully guarding the identity of individual plants or utilities. For example, INPO does not reveal plants’ safety scores. NRC officials reported that their process also guards against release of INPO information—for example, by reviewing INPO’s reports without taking possession of them. Plants’ interests in avoiding negative consequences also serve as an incentive for reporting. 
In particular, plants’ fear of exclusion from INPO and interest in avoiding negative comparisons to other plants are tools the industry uses to promote reporting and workplace safety. An industry reality is that U.S. nuclear power plants are “hostages of each other,” in that poor safety on the part of one plant could damage the entire industry’s future. In addition, the NRC and insurers would be made aware of a plant’s exclusion from INPO, leading to increased insurance costs, as well as a loss of accreditation for training programs, which would result in more regulatory involvement by the NRC. The NRC and INPO identified other incentives that encourage nuclear plants in their current safety efforts, including (1) NRC credit on penalties if a plant identifies and corrects its own accident precursors, (2) the high cost of corrections, (3) the negative effect of safety events on stock values, (4) the loss of public confidence, and (5) insurance rates. The confidentiality of the SRS records that the VA hospital administration maintains is protected from disclosure by 38 U.S.C. § 5705—a law that predated the establishment of the SRS by over 15 years. This law prohibits the disclosure of records that are part of programs to improve the quality of health care. Sanctions, including monetary fines, are attached to disclosure violations, but there are exceptions to the confidentiality of the records, including demands by law enforcement agencies or Congress. More recently, the Patient Safety and Quality Improvement Act of 2005 provided similar confidentiality provisions, including fines for disclosure, for voluntarily submitted SRS-related documents from all U.S. hospitals. The bilevel structure of the VA’s internal SRS facilitates deidentification. Individual hospitals collect and analyze reports and develop systemic fixes for their own hospital. 
Subsequently, the hospital sends reports and analyses—which are stripped of information that could identify individuals—to the central NCPS. The external, NASA-run SRS also deidentified reports. In addition, NASA destroyed the identification section of original reports in a process similar to that used for ASRS reports. The VA does not grant immunity for intentionally unsafe acts or criminal behavior, nor does the safety program replace VA’s existing accountability systems. However, individual facilities have used rewards as incentives, such as cafeteria coupons or cookies, to encourage reporting. In addition, hospital-level awards, such as awards from the NCPS to VA Medical Center directors, have also been used to encourage directors’ support for reporting, analyzing selected reports in a timely way, and following up to mitigate risks identified in their reports and analyses. While some of the SRSs in the three industries have local-level processes for analyzing safety reports, they also have a central, industry-level entity that collects, analyzes, and disseminates safety data and makes recommendations. These industry-level entities facilitate feedback and evaluation by (1) elevating facility-level safety data to industrywide lessons and disseminating them across the industry, including internationally, and (2) assessing safety culture and identifying units or worker subgroups in need of outreach or intervention. Some industry SRSs offer direct reporting to a central, industry-level entity, which is responsible for processing, analysis, and dissemination. For others, reporting takes place at the local level. While some level of report processing, analysis, and dissemination takes place at these local facilities, full or deidentified safety data are sent to a central, industry-level entity. Sending reports up to a central entity ensures that safety fixes identified through local processes are not lost to the rest of the industry. 
At the same time, local analysis and feedback can demonstrate the system’s value to workers and reinforce reporting. Because the central entity receives safety data from multiple organizations—whether through direct reporting or from local-level systems—the volume and variety of information increase the potential for identifying systemic issues and improving safety industrywide. In addition, the industries recognize that a central, industry-level entity might be necessary for bringing some difficult safety problems to light. This is because the central entity is more likely to consider the interests of the industry, whereas local-level managers might resist identifying systemic issues that would put personal or organizational interests at risk. These central entities, because of their position as industry representatives, are also in a better position to disseminate lessons across the industry and internationally. They provide a single source for industrywide notices of varying urgency, regular online newsletters, policy changes, briefings, and data systems. In addition, some of these entities have internationally recognized safety experts on staff—expertise that has been leveraged worldwide to inform international safety recommendations and SRS design. The central, industry-level entities are also in a better position to facilitate evaluation, including safety culture assessment; identification of reporting gaps (access to safety data from across the industry offers the potential for analysis of gaps across particular locations, organizations, or occupations); and needed system modifications. Furthermore, such entities often have access to other safety data, such as inspection information. This information can be compared with reporting data in order to identify sites in need of outreach and training. Such systemwide visibility provides an ideal position from which to conduct SRS evaluations. 
Industry experts we spoke with believe that their industries are safer, in part, as a result of their SRS programs. In limited cases, the central entities have been able to conduct evaluations or use performance metrics to assess safety culture improvements and the role of the SRS in those efforts, as is recommended under the Government Performance and Results Act. The ASRS shares lessons with all levels of the domestic aviation community and has served as a model of aviation safety reporting worldwide. NASA’s ASRS issues a series of industrywide notices based on ASRS reports, which are graded on the basis of the urgency and importance of identified safety issues, and it has been recognized worldwide as a model for collecting data from frontline workers. NASA provides “alerting” messages to the FAA and the airlines on safety issues that require immediate attention. NASA also disseminates ASRS information to 85,000 members of the aviation community via CALLBACK, a monthly online bulletin covering safety topics such as summaries of research conducted on ASRS data. Unions and airlines use this information in safety training. Among the SRSs we are aware of, only the ASRS offers access to its event database for outside researchers to conduct analysis and for ASRS staff to perform specially requested analyses for the FAA, NTSB, and others. The FAA also maintains an industry-level office—the VSP branch—which oversees seven different voluntary reporting systems, including the ASRS. Data from these SRSs provide information on events that would otherwise be unknown to FAA or others, and VSP’s role is to facilitate sharing of these data at the airline and industry levels. We observed VSP and ASRS staff representing U.S. airline safety interests at an international aviation safety reporting meeting to share lessons on aviation safety and SRS design and implementation. Such participation offers opportunities for safety improvement in aviation worldwide. 
For example, VSP and ASRS staff have supported efforts to develop safety reporting systems worldwide because aviation safety does not stop at the U.S. border. Most foreign aviation SRSs have been based on the ASRS model. The international aviation safety organization, the International Civil Aviation Organization, has called for each country to have an independent aviation safety reporting system similar to the ASRS. Despite the benefits of these SRSs, formal evaluation has provided insights for system improvement. For example, the FAA requested the NAPA evaluation of the ASRS, which recommended that the ASRS modernize through actions such as collecting and disseminating reports in electronic formats to better meet the needs of the aviation community. Currently, ASRS safety reports and monthly newsletters are primarily transmitted by e-mail. In addition to ASRS-specific evaluations, the FAA has access to additional investigations of aviation safety culture conducted over the last decade. For example, special studies of aviation specialists, such as controllers and maintenance workers, have identified reasons for their lower reporting rates. These studies revealed specific aspects of cultures in these professions that would discourage reporting. For example, controllers were highly focused on bureaucratic boundaries that enabled them to define away—rather than report—unsafe conditions they perceived to be outside their responsibility. Similarly, according to FAA officials, maintenance workers operated in a strongly punitive culture that led them to assume that if a supervisor told them to violate a rule, doing so did not create an unsafe—and hence reportable—condition. These studies made possible targeted efforts, such as a reporting program just for controllers, that resulted in a growing proportion of safety reports from nonpilots. INPO’s lesson-sharing program uses the Nuclear Network—an industry intranet—for sharing safety information. 
This network houses event data that plants can access and is a platform for INPO to disseminate alerts. Information transmitted via this system includes Significant Operating Event Reports—the highest-level alert document—as well as experiential and nuclear technical information. Plants can also use the network to ask questions or make comments that can be sent to one, several, or all users. Apart from the direct feedback reporters receive from the plant, the key to getting workers to participate in reporting was seeing—via the Nuclear Network—the corrective actions developed in response to reports they had made, according to the INPO liaison. INPO is seen as a model for other national and supranational nuclear safety organizations, such as the World Association of Nuclear Operators, an organization representing the global nuclear community. As such, INPO has recently begun to participate in the Convention on Nuclear Safety, a triennial international commercial nuclear safety effort. INPO also evaluates plants’ safety improvement programs, although the evaluations are generally not publicly available, according to an INPO liaison. INPO performs a type of “gap analysis” at the biennial on-site plant inspections and conducts safety culture surveys with a sample of staff before each inspection. Reporting gaps are evaluated at the plant level (not by occupation or work group) by looking for reductions in report volume and mining the plant’s corrective action reports. A reduction in reporting year to year is interpreted as an indicator of a potential problem rather than an improvement in safety conditions, because such reductions can indicate a lack of management support for reporting. 
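The year-over-year volume check described above can be sketched as a simple screen over per-plant report counts. The data layout, threshold, and function name below are illustrative assumptions for the sake of the example, not INPO’s actual method.

```python
# Illustrative sketch: flag plants whose safety-report volume dropped
# year over year, treating a decline as a potential warning sign rather
# than evidence of improved safety. Data and threshold are hypothetical.

def flag_reporting_declines(counts_by_plant, threshold=0.15):
    """counts_by_plant maps plant name -> list of yearly report counts.

    Returns plants whose most recent year fell more than `threshold`
    (as a fraction) below the prior year's count."""
    flagged = []
    for plant, counts in counts_by_plant.items():
        if len(counts) < 2 or counts[-2] == 0:
            continue
        decline = (counts[-2] - counts[-1]) / counts[-2]
        if decline > threshold:
            flagged.append((plant, round(decline, 2)))
    return flagged

counts = {
    "Plant A": [420, 415, 300],   # sharp drop: possible reporting problem
    "Plant B": [380, 390, 405],   # rising volume: healthy reporting culture
}
print(flag_reporting_declines(counts))
```

A flagged plant would then warrant the kind of follow-up the report describes, such as mining its corrective action reports for the cause of the decline.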
In addition, if a plant receives a low safety score as a result of inspection findings, INPO provides extra attention and assistance by assigning a team of industry experts to engage in weekly consultations with plant directors, review corrective actions, discuss plant needs, develop solutions, and provide peer assistance and accompaniment to seminars. In its position as the industry-level entity responsible for the VA SRS, NCPS creates and disseminates key policy changes to the VA health care system in response to trends identified from patient safety reports. For example, the NCPS (1) designed and implemented a program that promotes checklist-driven pre- and postsurgical briefings that, according to the SRS program director, have been associated with reduced surgical mortality across the VA hospital system and (2) developed new requirements for CO2 detectors on every crash cart for checking safe intubations outside of operating room settings. The NCPS has played a role in disseminating its SRS model and tools for safety improvement to U.S. states and other federal agencies, including the AHRQ. Specifically, the NCPS provided training to all 50 states and the District of Columbia via the Patient Safety Improvement Corps, a program funded by the AHRQ. The VA-supplied state training contributed heavily toward building a common national infrastructure to support implementation of effective patient safety practices. Further, after attending the VA seminars, several foreign countries implementing their own SRSs have adopted tools developed by the VA. The NCPS has also conducted evaluations of the SRS program, which have provided information for SRS and safety culture improvements. For example, in 2008, the NCPS published a study of the effectiveness of actions hospitals developed in response to SRS reports of adverse drug events. 
The study found that changes in clinical care at the bedside—such as double-checking high-risk medications—and improvements to computers and equipment were effective solutions, but training was not. In addition, the NCPS has conducted three safety culture surveys, the most recent of which enabled identification of safety culture differences among staff subgroups in order to target outreach and training. To support future evaluations of this kind, the NCPS established several criteria to assess the quality of local-level processes for reporting, analysis, and safety improvement. The CDC and APHIS Select Agent Program (SAP) has taken steps to improve reporting and enhance the usefulness of the theft, loss, and release (TLR) reporting system as a safety tool. Additional steps to improve the TLR system, as suggested by the literature and case studies, include increased awareness of the culture in biological labs and improvements in the three key areas—reporting and analysis, protections and incentives, and feedback mechanisms. See appendix II for a summary of lessons derived from the literature and case studies that can be applied to the TLR system. Recognizing the usefulness of the TLR system as a safety tool, the CDC and APHIS SAP has dedicated resources to manage the system. The TLR reporting system for select agents was developed in 2002, after the 2001 anthrax attacks. As the number and types of reported incidents increased, an outcome of the new reporting requirements, the agencies implemented processes to utilize the TLR system as a tool to manage the Select Agent Program. In addition, the CDC reassessed its administration of the system to consider how it could be used as a safety tool, rather than just a recording system. To its credit, the CDC employed a safety science expert to manage the TLR reporting system and is now exploring ways of using the TLR data to identify systemic safety issues. 
APHIS has also utilized the TLR as a tool to identify trends such as (1) gaps in administrative oversight of personnel and training and (2) weaknesses in safety and security policies and procedures in regulated entities. Each TLR is reviewed by a compliance officer, security manager, and subject matter experts to identify trends and areas of concern. Identified issues are subsequently discussed with the reporting facility’s senior management, with additional monitoring and inspections as needed. The CDC and APHIS also rely on periodic on-site lab inspections to get an understanding of the culture, with respect to safety and reporting, and identify areas for outreach and training. The agencies inspect labs to ensure that they are in compliance with the safety, security, training, and record-keeping provisions outlined in the regulations. As part of this process, the agencies use checklists developed from regulations and nationally recognized safety standards to review laboratory safety and security and to develop observations. In addition, the agencies interview lab staff and examine documentation, such as medical surveillance documents, exposure or incident records, and minutes from Institutional Biosafety Committee meetings. Review of such documentation can provide an indication of possible incidents with select agents or toxins. During these inspections, the CDC and APHIS officials seek to (1) identify gaps in knowledge about safety and reporting and (2) report on areas needing improvement. The information the agencies derive from these inspections and from TLR reports can provide useful information about the culture of safety and reporting within labs. However, lessons from the literature also suggest that systematic assessment of the culture, such as through ongoing surveys or studies, can provide invaluable information about how the specific working environment can affect perceptions of safety and reporting requirements. 
These perceptions—and variations, for example, within or across working environments or occupations—can affect what is considered a reportable event; feelings of responsibility for or fear of reporting; and the value of reporting safety events. For example, studies examining the effects of culture on safety and reporting in the aviation and health care industries have found that perceived occupational hierarchies, such as between doctors and nurses or pilots and cabin crew; authority structures; organizational factors; concepts of justice; and other factors can affect safety and reporting. According to CDC and APHIS officials, they have no plans to arrive at such an awareness through cultural assessment. Nevertheless, agency officials agree that culture matters when it comes to safety and reporting. For example, they noted that culture may differ by a lab’s size and level of resources. Larger labs or labs with more resources tend to have better safety and reporting. Other agency officials noted that, based on career experiences, they have become aware of safety differences across different types or levels of labs. According to a CDC official, staff in higher-level labs, such as BSL-4 labs, have recognized the danger of the material they are working with. These facilities are also more likely to have biosafety officers, whose presence, according to the CDC official, tends to make workers more conscientious about safety. Another official noted that, while you might find sandwiches or soda in the refrigerator of a BSL-2 lab, these items would never be found in BSL-4 labs. Safety culture differences between clinical and research labs were also noted by CDC officials. Such variation in culture across labs was also noted by domestic and international biosafety specialists we spoke with. 
Despite recognition of such variation across labs, officials stated, the CDC does not have a unified position on the issue, and the research does not exist to definitively establish safety culture differences by lab type, occupation, or sector. Greater awareness of cultural influences and how they affect safety and reporting in the labs could (1) help the agencies better target outreach and training efforts and (2) provide insights into whether reporting system design and implementation changes are needed to address lab variations in safety and reporting. The CDC and APHIS SAP has taken steps to better define reportable events, which can increase the likelihood that workers will report when required. For example, in early 2008, the CDC and APHIS published the Select Agents and Toxins Theft, Loss and Release Information Document, which includes detailed scenarios on what and when to report. Since the TLR reporting program was established in 2002, the agencies have seen reports increase substantially; since a 2008 initiative to better inform the lab community of incident-reporting requirements, the CDC and APHIS noted that they receive approximately 130 incident reports per year. The types of labs reporting have also broadened. According to the CDC, the increased reporting is the result of better awareness of and compliance with reporting requirements, rather than an increase in thefts, losses, or releases. Indeed, of the reported TLRs, there have been no confirmed thefts, one loss, and only eight confirmed releases. To clarify reportable events, the Select Agent Regulations require that the individual or entity immediately notify the CDC or APHIS upon discovery of a release of an agent or toxin causing occupational exposure, or release of a select agent or toxin outside of the primary barriers of the biocontainment area. The agencies’ Select Agents and Toxins Theft, Loss and Release Information Document further clarifies reportable events. 
The document defines a release as a discharge of a select agent or toxin outside the primary containment barrier due to a failure in the containment system, an accidental spill, occupational exposure, or a theft. Furthermore, any incident that results in the activation of medical surveillance or treatment should also be reported as a release. The document also emphasizes that occupational exposure includes any event in which a person in a registered facility or lab is not appropriately protected in the presence of an agent or toxin. For example, a sharp injury from a needle being used in select agent or toxin work would be considered an occupational exposure. While these reporting requirements are fairly broad, they do require a degree of certainty about the occurrence of an event. But, in some cases, recognition of a reportable event may come only when consequences are realized. While the agencies’ steps to better define reportable events can increase the likelihood that recognized events will be reported, according to the literature and biosafety specialists, lab workers are often unaware that a release has occurred unless or until they become sick. For example, early studies of LAIs found that as many as 80 percent of all reported LAIs could not be traced back to a particular lab incident. A more recent study found similar results. The absence of clear evidence of the means of transmission in most documented LAIs highlights the importance of being able to recognize potential hazards because the likely cause of these LAIs is often unobserved. While a great deal is known about micro-organisms to support safe lab practices, microbiology is a dynamic and evolving field. New infectious agents have emerged, and work with these agents has expanded. In addition, while technological improvements have enhanced safety, they can also introduce new safety challenges. 
For example, failures in a lab system designed to filter aerosols recently led the manufacturer to recall the system. The dynamic nature of the field, coupled with the difficulty of identifying causal incidents in LAIs, suggests substantial potential for unintentional under-reporting. In such an environment—where workers are waiting for an obvious event to occur before reporting—a significant amount of important, reportable safety information could be lost. Consequently, while reporting requirements for releases may now be clear for many incidents or for observed consequences, broader reporting thresholds may be necessary to accommodate emerging safety issues and the unobserved nature of many LAI events. According to lessons from the literature and case studies, expanding reporting thresholds—in this case, to include observed or suspected hazards—can help capture valuable information for accident prevention. The industries in the case studies all struggled with how to recognize, and thus report, such events. However, over time, the feedback they received from these reports, in the form of specific safety improvements, helped workers develop familiarity and comfort with recognizing and reporting such events. An example in the lab community might be the practice of mouth pipetting, drawing an agent into a pipette by sucking on one end. At one time, mouth pipetting was a common practice, despite the high risk of exposure. Even though not every instance resulted in exposure or an LAI, some did, and eventually the activity was recognized as a potential hazard—an accident precursor. Expanding the TLR reporting threshold to include hazards could provide additional data that might be useful for safety improvement efforts. For example, INPO encourages reporting of events at all levels of the risk pyramid—including the hazard level—for the corrective actions reporting programs of nuclear power plants. 
This level of reporting ensures as complete coverage as possible of potential safety issues. For the TLR, reporting at this level could be voluntary or mandatory. Moreover, until a labwide voluntary reporting system is implemented, reporting at this level could further develop the reporting culture among select agent labs. The CDC and APHIS SAP has taken steps to incorporate deidentification measures to further protect the confidentiality of entities reporting thefts, losses, or releases. While entity-specific information is protected from release under FOIA, there was an instance when specific entity information was somehow leaked to the media after the CDC provided the data in response to a congressional request. As a result, the agency provides only deidentified report forms in response to congressional requests. In addition, to further support reporter confidentiality in the event of audit or congressional requests to view TLR information, the CDC has established an access-controlled reading room for viewing these reports. It expects these measures to prevent any future prohibited disclosure of entity-specific data while still providing access to theft, loss, or release information for those with a special need to view it. According to lessons from the literature and case studies, even the perception of a confidentiality breach can quash reporting. Consequently, the agencies’ measures to ensure confidentiality can increase confidence in reporting. Apart from the requirement to report, labs also have some incentive for reporting. One such incentive, according to CDC officials, is labs’ interest in avoiding increased oversight. In addition, lab officials know that (1) select agents are on the list because they are dangerous and (2) it is of critical importance to promptly report incidents to ensure proper care of workers and the public. CDC officials stated, however, that too much discretion about what and when to report could result in the under-reporting of more serious events. 
As the experiences of the case industries illustrate, protection of reporter confidentiality is an ongoing effort, even when strong legislative provisions exist to protect reporters’ identities. Because, as mentioned above, even the perception of a confidentiality breach can quash reporting, strong incentives for reporting—such as limited immunity provisions—can balance these fears and encourage continued reporting, according to lessons from the literature and case studies. If the CDC or APHIS discovers possible violations of the select agent regulations, the following types of enforcement actions may occur: (1) administrative actions, including denial of application or suspension or revocation of certificate of registration, (2) civil money penalties or criminal enforcement, and (3) referral to the Department of Justice for further investigation or prosecution. Currently, even if entities report violations, there are no provisions for receiving immunity from these enforcement actions. In the aviation industry, pilots face the possibility of similar enforcement actions for violations of regulations. However, the FAA provides some disciplinary immunity for pilots reporting violations of regulations to ASRS. Such immunity is in recognition of the fact that (1) information about pilots’ errors is essential for identification of systemic problems and (2) pilots would be unlikely to report their errors without some incentive to do so. Similar provisions for limited immunity from administrative action or reduced monetary penalty could be offered to labs for some violations of select agent regulations. Although the CDC and APHIS have not yet explored this option, such an incentive could be a powerful tool for ensuring reporting compliance. The CDC and APHIS are uniquely positioned to support feedback and evaluation efforts that are based on TLR information. 
The agencies’ oversight responsibilities for registered labs and their recognized expertise in laboratory safety practices provide them visibility and authority across the lab community. Such a position, according to lessons from the literature and case studies, is ideal for (1) disseminating feedback from SRSs and (2) evaluating the effectiveness of the reporting program. Currently, the agencies have a process for providing feedback to the reporting institution and are beginning to explore avenues for sharing safety lessons across the labs and internationally. In addition, the CDC has begun using the data to develop lessons learned from reported information. Although deidentified reports are not available to the general public, they are being used for special research studies sponsored by the Select Agent Program. For example, information from deidentified reports has been used for conferences such as the yearly Select Agent Workshops, sponsored by the CDC, APHIS, and the Federal Bureau of Investigation. The agencies are also analyzing data on select agent release reports and plan to publish the findings in a publicly available, peer-reviewed journal. Such feedback demonstrates the value of reporting, according to lessons from the literature and case studies. Lessons from the case studies also indicate that using SRS data to develop guidance and sharing such information internationally can support industrywide safety improvement efforts. For example, TLR data could provide valuable information for updates to the BMBL and World Health Organization guidelines, which can benefit the worldwide lab community. When a lab reports a TLR, the CDC or APHIS provides feedback and, if necessary, follows up to determine the root cause or initiate surveillance. 
While the CDC recognizes the usefulness of TLR reports for generating data that can (1) help spot trends, (2) highlight areas for performance improvement, and (3) show limitations in current procedures, it is just beginning to collect enough data to see patterns of nonreporting, according to CDC officials. The CDC expects that in the future, it will have collected enough data, including inspection data, to identify reporting patterns and conduct targeted outreach to nonreporting labs. However, the agencies do not yet have a specific plan to identify reporting gaps in order to develop targeted outreach and training or to assess the system’s effectiveness. To further support targeted outreach, as well as system modification, evaluation is needed. As we have previously reported, such evaluation can be a potentially critical source of information for assessing the effectiveness of strategies and the implementation of programs. Evaluation can also help ensure that goals are reasonable, strategies for achieving goals are effective, and corrective actions are taken in program implementation. For example, an evaluation of the ASRS program revealed the need to improve the usefulness of the system through system modifications and increased outreach to certain populations. According to CDC Select Agent Program officials, they have had general reviews, such as an HHS Office of Inspector General review and a federally funded, third-party review of procedures conducted by Homeland Security. However, these reviews did not focus on the effectiveness of the TLR reporting system. Safety reporting system evaluation literature and case studies of SRSs in three U.S. industries—aviation, commercial nuclear power, and health care—provide lessons for design and implementation considerations for a national biological lab SRS. First among these lessons is the need to set system goals and assess organizational culture, as illustrated in figure 4. 
However, assessment of organizational culture is difficult in the context of U.S. biological labs because there is an unknown number of labs and, except for labs in the Select Agent Program, no entity is responsible for overseeing all labs. While many federal agencies have labs and are involved in the industry, no single regulatory body has the clear responsibility or directive for the safety of all laboratories. Consequently, an important part of the goal-setting and assessment process for a biological lab SRS is determining the scope of labs to which the system would apply. For example, specific system goals, such as the ability to identify trends or incidence rates, may be possible with one type or level of lab, but not another. Similarly, assessment may reveal that differences in organizational cultures across lab types are so significant that appropriate SRS features for one type of lab would not apply well to another. Until such a goal-setting and assessment process is completed, design and implementation options in the three key areas—reporting and analysis, reporter protections and incentives, and feedback mechanisms—can be considered in the context of available information on organizational culture in biological labs and potential goals for a biological lab SRS. In particular, the following can provide some context to guide early decisions for the design and implementation of an SRS for the lab community: biosafety research, experiences with the TLR reporting system, and biosafety specialists’ perspectives. Such context can be further refined once assessment and stakeholder input are obtained. In addition, the NIH has begun developing a prototype reporting system for a subset of its intramural research labs. 
Lessons from how this prototype system works for a subset of labs could also inform design and implementation considerations for a national biological lab reporting system. Existing information about the potential goals for a biological lab SRS and the organizational culture of these labs suggests certain design and implementation features in the first key area: reporting and analysis. Figure 5 shows the relationship of program goals and organizational culture to this key area. The level of event of interest, probable SRS goals, and organizational culture all suggest voluntary reporting for a biological lab SRS. While the TLR reporting system for select agents is focused on incidents or accidents that pose the greatest danger to workers and the public, an SRS for nonselect agents could be used to gather information on hazards and potentially less serious incidents and accidents in order to collect precursor data. Systems that focus on less serious events and that collect precursor data to support learning rather than enforcement goals are generally associated with voluntary reporting, according to lessons learned. Voluntary reporting for a biological lab SRS also corresponds with the views of biosafety specialists we spoke with. Reporting to an SRS—especially for incidents beyond LAIs or the theft, loss, or release of select agents—would be relatively new to the lab community. Although select agent labs have become familiar with reporting theft, loss, or release incidents, previous reporting failures indicate that, even among this subset of labs, reportable events may still be unclear. In such situations, allowing workers to report events in their own words, rather than asking them to classify the event as a certain type of hazard or error in order to report, can facilitate reporting. 
Classifying events—that is, applying standardized descriptions of accidents, incidents, and hazards—can facilitate safety improvement across the industry by providing a common language for understanding safety events. But classification can also limit reporting if workers are unsure of how to apply it. One solution for industries new to SRS reporting is to apply classification at a higher level, for example, through the event review or analysis process. Ensuring the reporting process is as clear and simple as possible is especially important for the lab community. Although LAIs are widely recognized as under-reported, there is, at least, a long history of reporting these events among lab workers. However, lab workers do not have as much experience reporting events without an obvious outcome, such as an LAI. Many of the biosafety specialists we spoke with had difficulty envisioning the types of events—apart from LAIs—that might be reportable. In addition, even when LAIs do occur, many are never linked with a specific causative incident, so information about potential event precursors is never communicated or is difficult to identify. Difficulty recognizing exposure is a reality of work in these labs. LAIs often occur through aerosol exposure, and the activities that can create such conditions are numerous. However, all three case-study industries grappled with similar difficulties in recognizing and reporting events that did not result in obviously negative outcomes. One way the industries addressed this difficulty was to allow workers to report a broad range of events in their own words. Over time, as workers saw concrete results from their reports, such as improved processes or guidance, their ability to identify less concrete, but equally unsafe hazards and incidents—even those without obvious consequences—grew. Expecting lab workers to classify events in order to report them would likely limit reporting. 
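The design choice described above, accepting free-text narratives at intake and applying standardized classification later during review, can be sketched as follows. The categories, keywords, and function names are hypothetical illustrations, not an established biosafety taxonomy.

```python
# Illustrative sketch of open-format intake with classification deferred
# to the analysis stage: workers submit free-text narratives with no
# classification required, and analysts (here, a simple keyword heuristic
# standing in for human review) assign event categories afterward.
# Categories and keywords are hypothetical assumptions.

RULES = {
    "sharps": ("needle", "sharp", "cut"),
    "aerosol": ("aerosol", "spill", "splash"),
    "equipment": ("centrifuge", "cabinet", "filter"),
}

def intake(narrative):
    """Accept a report in the worker's own words; no category needed."""
    return {"narrative": narrative, "category": None}

def classify(report):
    """Analyst-level pass that assigns a category after intake."""
    text = report["narrative"].lower()
    for category, keywords in RULES.items():
        if any(word in text for word in keywords):
            return {**report, "category": category}
    return {**report, "category": "unclassified"}

report = intake("Noticed a small spill near the centrifuge during cleanup")
print(classify(report)["category"])
```

The point of the split is that a worker unsure how to label an event can still report it, while the industry still gains a common language for analysis once the review step runs.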
In such situations, lessons learned suggest allowing workers to report events in their own words to facilitate reporting. The lab community is organizationally diverse, and the population of labs is unknown. Opening reporting to all workers, offering multiple reporting modes (e.g., Web and postal), and using forms with open-question formats that allow workers to report events in their own words can facilitate reporting in the face of such uncertainty, according to lessons from the literature and case studies. Biological labs operate across a wide range of employment sectors, locations, and levels of containment. There are BSL-2, 3, and 4 labs in private, academic, and public settings across the United States. Staffing models for these labs are likely as different as the lab populations. Safety culture and reporting proclivity also vary across lab types. For example, according to biosafety specialists, clinical and academic labs—in contrast to government and private labs—face greater challenges to creating a safety culture and reporting events. According to one biosafety specialist, in academic labs, students expected to complete lab work before they have received adequate safety training may not feel they are in a position to demand such training. Specialists also indicate that higher-level labs (BSL-3 and 4)—especially the larger ones with better resources—have personnel, equipment, and/or processes to better support safety culture than lower-level, smaller labs with fewer resources. Furthermore, the consequences of accidents are so great at higher-level labs that the culture is generally more cautious. At lower-level labs, the perception of risk and actual risk are lower, so practices are not as stringent as they would be at higher-level ones. The work environment at biological labs also varies. In particular, some work is done in teams and some individually, and some is completed overnight because of time-sensitive experiments in the research. 
In addition, the solo nature of much lab research means that a single lab worker may be the only one who knows about an incident. For lab work, the external visibility of accidents and incidents present in aviation or some areas of health care may not exist. Bioresearch errors are also much harder to spot than errors in other industries. For example, nuclear safety officers can use radiation detectors to determine whether breaches of protocol have occurred by identifying hot spots in suspicious areas, such as a phone outside the lab. No similar tracking mechanism exists for bioresearch. Therefore, the only objective proof of most accidents is that someone became ill. In addition, lab workers have little incentive to report if the incident occurred as a result of their own error, according to biosafety specialists. One specialist, however, believes there is a fair degree of reporting on equipment failures because researchers generally want to ensure that the equipment is fixed. Such variation has consequences for reporting. According to lessons from the literature and case studies, assessments can provide information about aspects of organizational cultures, structures, or processes that can affect reporting. However, a comprehensive assessment of this sort is difficult because (1) the population of labs is unknown and (2) no entity is responsible for conducting such an assessment. Given the uncertainty about cultural influences that may affect reporting behavior, more inclusive reporting options can facilitate reporting, according to lessons from the literature and case studies. For example, uncertainty about lab workers’ access to reporting forms or ability to complete detailed forms can be minimized if (1) workers can report in whichever mode is most accessible to them (Web or postal) and (2) the forms do not require overly detailed or technical explanations. 
In an environment where much of the work is done alone and incentives may not exist for reporting, an SRS that is open to all lab workers (including security and janitorial staff) can facilitate reporting where none might occur. Accepting reports from workers not directly involved in research can increase the volume of safety data that can be obtained. Multimode and open-reporting formats, as suggested above, support open reporting since staff with varying knowledge of biosafety terms—such as janitorial, security, or animal care staff—are still able to report incidents or hazards in their own words in the way that is most convenient to them. Historically, the preferred model of biosafety reporting has been hierarchical. This ensures that workers receive timely medical intervention and surveillance. Although it is important that workers have a mechanism for receiving immediate medical attention and surveillance when needed, much important safety information could be lost if only supervisors or managers are allowed to report. Hierarchical reporting structures may limit the amount of useful safety data that can be received because a filtering process takes place at each level in the reporting hierarchy. As the information moves up the reporting structure, each person assesses whether the event is reportable. If the person decides that it is, he or she will report his or her own interpretation of events. Allowing all workers to report directly to an SRS removes this filter and can increase the number of reports and the amount of information collected from reports. For example, reports from multiple sources can enable analysis of events from multiple perspectives. While workers should always be encouraged to report potential exposures and other hazards to their supervisors so that they can receive timely medical attention, they should also be able to report incidents directly to an SRS. 
The HHS and USDA—as central, recognized authorities in the biological lab community—represent the kind of industry-level entities that, according to lessons learned, are necessary for effective dissemination and evaluation activities. However, the agencies’ regulatory role in the Select Agent Program could inhibit voluntary reporting, suggesting that an alternative reporting mechanism may be necessary. According to lessons from the case studies, dual reporting options can facilitate reporting in such situations. For example, if workers are concerned about reporting safety events—either to an internally managed SRS or to the regulator—an external, independently managed SRS can be useful. Alternatively, if workers are comfortable reporting to a local SRS, these programs can be very effective when the information from local systems is fed to a central, industry-level entity that can analyze data across the industry and disseminate safety improvements industrywide. While each case study industry differs in its approach, all three rely on dual (or multiple) reporting options. Specifically, the FAA relies on the independently run ASRS, as well as seven other key reporting programs, to collect safety data. Events that meet reporting requirements can be reported to the ASRS—meeting the need for an independent reporting mechanism for those concerned about reporting to either their local (airline-run) SRSs or to the regulator. In addition, as part of the FAA’s other reporting programs, the FAA receives SRS data from the airlines, which it uses to develop industrywide safety improvements. The commercial nuclear power industry also has reporting options. While each plant has a reporting system for corrective actions, a portion of the more significant reports are passed on to INPO for development of industrywide safety improvements. Individuals and plants also have the option to report to NRC’s Allegation Program. 
Finally, in designing its reporting program, the VA created two reporting options—one externally managed by NASA and one local, hospital-based program in which safety data are sent on to VA’s National Center for Patient Safety (NCPS) for development of industrywide safety improvements. While the industries might encourage workers to use one option over another, workers are still able to report to the system most comfortable for them. Both options, however, use an entity with industrywide visibility and recognized authority to disseminate SRS information and direct system evaluations. An external, independently managed SRS for the lab community offers several advantages, including the (1) potential to reduce workers’ fear of being punished for reporting, (2) ability to contract for system management, and (3) centralization of safety data. Nevertheless, since the individual labs have the most intimate knowledge of staff, pathogens, and operations, several biosafety specialists strongly maintained that the lab facility was the appropriate level for reporting and analysis. According to lessons from the literature, as well as the perspectives of biosafety specialists, analysis of safety reports should be done by qualified biosafety professionals and others with appropriate expertise or knowledge. In addition, processes for local-level collection and analysis of SRS reports can facilitate worker buy-in for reporting, according to lessons from the case studies. However, not all labs have the same resources for collecting and analyzing reports. Furthermore, the focus on safety culture across the lab community may not be sufficient to support an SRS program that operates only at the local level. But local-level support—as well as encouragement of reporting, receptivity to safety concerns, and regard for the field of biosafety—is central to a robust reporting program. 
Even if there is receptivity to biosafety issues, when safety is the responsibility of those internal to the organization, there may be conflicts of interest in addressing safety issues. While safety improvements are most useful when shared across the lab community, sharing this information may raise institutional concerns about funding streams, public perception of the institution, and professional standing of lab workers, according to biosafety specialists we spoke with. Given the advantages and disadvantages of SRS administration at both the local and agency levels, dual reporting options may be necessary, at least initially. For example, the VA initiated its safety reporting program with both internal and external options. Although the VA canceled the NASA-run program after nearly 10 years, in recognition of the importance of an external reporting option, some efforts to reestablish the system continue. Existing information about the potential goals for a biological lab SRS and the organizational culture of these labs suggest certain design and implementation features in the second key area: reporter protections and incentives. Figure 6 shows the relationship of program goals and organizational culture to this key area. Voluntary reporting to an SRS—especially of incidents that do not result in LAIs—would be a new expectation for some lab workers. As mentioned earlier, even the perception of a confidentiality breach can quash reporting. And given that entity information from the TLR reporting system was leaked to the press, lab workers might have reason for concern about reporting similar incidents to a voluntary system. In addition, the literature and biosafety specialists noted, confidentiality concerns are among the barriers SRS managers will face in implementing a successful reporting program. 
Therefore, concerns about confidentiality suggest that a biological lab SRS will require strong confidentiality protections, data deidentification processes, and other incentives to encourage reporting, according to lessons learned. In addition, while the literature suggests anonymous reporting as one solution for minimizing confidentiality concerns, it is not an ideal one here. The complexity of biosafety issues would require a mechanism for follow-up with the worker or reporting entity because interpretation of the incident from a written report can often differ from interpretation of the incident from talking with the reporter, according to biosafety specialists. Biosafety specialists also noted that developing trust in reporting has the potential to be problematic because of labs’ existing reporting culture. For example, specialists noted the following influences on lab workers’ likelihood of reporting accidents or incidents: realization that there is risk associated with laboratory work; difficulty recognizing that an incident has occurred; disincentives for reporting, such as the threat of punishment for reporting or concerns about (1) the reputation of both the worker and the institution, (2) the potential loss of research funds, and (3) the fact that reporting may take time away from work; and lack of perceived incentives for reporting, such as the failure to see the value of reporting accidents or incidents, as well as the fact that lab work may be done alone, which does not provide an incentive for self-reporting of errors. Given the confidentiality concerns and other difficulties of introducing a voluntary reporting system into the biological lab community, deidentification of safety reports takes on more importance. For example, according to biosafety specialists at one university, a primary concern with the establishment of their SRS was anonymity, especially for those in the agricultural labs. 
These researchers were concerned that if their identities became known, they could suffer retaliation from organizations opposed to their research. While the SRS managers chose to make the reports available to the public via the Web, they also deidentified the reports to prevent individuals outside the lab community from being able to identify individuals or specific labs. However, because the university research community is a small one and lab work is fairly specific, it is not overly difficult for those in the lab community to determine who was involved in an incident if a report mentions a particular pathogen and what was being done with it. As a result, deidentification measures may have to go beyond simply removing reporter information. In addition, if deidentification measures are insufficient for maintaining confidentiality, workers and entities may need added incentives to encourage reporting in light of the fact that their identities may become known. There are several incentives for the lab community to report, according to biosafety specialists. For example, deidentified SRS data can enhance the evidentiary foundation for biosafety research since they provide an extensive, heretofore unavailable data source. Such analyses benefit the overall lab community by providing a greater evidentiary basis for risk-based decisions for—or against—expensive or burdensome lab safety protocols. In addition, workers’ trust in reporting can be developed over time at the local level, through rewarding, nonpunitive reporting experiences. The relationship workers have with the lab’s safety staff is central to this effort, according to biosafety specialists. Trust in an institution’s Occupational Health Service, biosafety officer, or other official responsible for safety encourages workers to overcome ignorance, reluctance, or indifference to reporting. 
Biosafety specialists at one university credit the success of their nonpunitive SRS to the safety-focused relationship between the biosafety officer and lab staff. At first, according to these biosafety specialists, the researchers were afraid that SRS reports would be used to punish them academically or professionally. Over time, however, they saw the implementation of a nonpunitive system that had positive outcomes for safety improvements in the lab. While biosafety specialists believed that development of a reporting culture might be difficult, they offered a number of suggestions for overcoming reporting barriers, including (1) developing a safety office in conjunction with the research staff, (2) ensuring continued interaction and shared conferences on safety issues with researchers and the biosafety office to show the value of reported information, and (3) reinforcing the importance of reporting by showing a concern for the individual who is exposed rather than focusing on punishment. In addition, the CDC noted the importance of biosafety training, which is an important part of laboratory safety culture that has an impact on workers’ ability to recognize and report safety issues. This type of continued support for reporting—as evidenced through positive feedback, awards, and nonpunitive experiences and training—fosters trust and willingness to report, according to lessons learned. Existing information about the potential goals for a biological lab SRS and the organizational culture of these labs suggest certain design and implementation features in the third key area: feedback mechanisms. Figure 7 shows the relationship of program goals and organizational culture to this key area. The CDC and NIH—as recognized authorities on working safely with infectious diseases—disseminate safety information to the entire lab community. 
For example, documents such as the BMBL and recombinant DNA guidelines provide the foundational principles for lab safety practices; they are updated periodically to reflect new information about infectious agents and routes of exposure. In addition, the CDC’s MMWR reports provide alerts as emerging safety issues are identified. Lessons suggest that entities with industrywide visibility and recognized authority are ideally situated to ensure SRS data and safety improvement initiatives are disseminated across the industry. Such entities would be better positioned than individual labs, facilities, states, or others to disseminate SRS-based alerts or other safety reports in a way that reaches all labs. In addition, in order to counter the potential conflicts of interest that can arise with sharing data across labs, biosafety specialists we spoke with supported the notion of an “industry-level” entity for disseminating safety data. In particular, the specialists noted that the typical reporting relationship between the biosafety officer and lab management is not independent; this relationship might therefore inhibit sharing of safety data beyond the individual lab. Thus, a central, industry-level unit—responsible for collecting and disseminating SRS reports from either workers or organizations—minimizes such concerns and facilitates industrywide sharing of SRS data, according to lessons learned. SRS data can also support training, which is a key component of biosafety. These data can provide the experiential basis for specific safety precautions. For example, one biosafety specialist noted that staff want to know this information in order to accept the need for precautions and procedures. Currently, there is no such experiential database; however, an industry-level entity could facilitate the creation and maintenance of such a database from SRS data. 
Some of the biosafety specialists we spoke with noted the importance of ongoing monitoring of safety culture, for example, through a lab director’s personal investment of time and direct observation and communication with lab workers. Without such observation and communication, as well as feedback from workers, managers will remain unaware of areas where the safety culture is likely to lead to serious problems. While specialists did not specifically note the need for formal evaluation to solicit this feedback, lessons learned suggest that evaluation is useful in this regard. Specifically, evaluation can help identify (1) problem areas in the safety culture and (2) where targeted outreach and training or program modification might lead to better reporting and safety improvement. Such evaluation is important in ensuring the system is working as effectively as possible, according to lessons from the literature and case studies. Safety reporting systems (SRS) can be key tools for safety improvement efforts. Such systems increase the amount of information available for identifying systemic safety issues by offering a means through which workers can report a variety of events that shed light on underlying factors in the work environment that can lead to accidents. Our extensive review of SRS evaluation literature and case studies of SRS use in three industries provides an empirical, experience-based foundation for developing a framework for SRS design and implementation. This framework can be applied across a wide variety of industrial, organizational, professional, and cultural contexts. The industries we studied, despite their differences, shared similar experiences designing and using SRSs for safety improvement. The commonalities they shared provide the basis for our lessons—the pros and cons, successes and failures—relating to particular design and implementation choices across a wide variety of work environments. 
However, it is important to recognize the uniqueness of any work environment. The biological lab community is undoubtedly a unique working environment and blindly applying an SRS from one industry to the lab community would be a mistake. This observation underlies the leading finding among our lessons: in choosing the system features most appropriate for the environment in which the SRS will operate, consideration of program goals and organizational culture is essential. Such consideration provides the context for choosing features in three key areas of system design and implementation—reporting and analysis, reporter protections and incentives, and feedback mechanisms. The Centers for Disease Control and Prevention (CDC) and Animal and Plant Health Inspection Service (APHIS) Select Agent Program (SAP) manage a mandatory reporting system for theft, loss, and release (TLR) of select agents. Although this system is compliance-based, it can be used— like the SRSs in our study—to identify systemic safety issues. In fact, the agencies have taken steps to use the system in this way. For example, the agencies have dedicated expert resources to manage the system, developed guidance to clarify reportable events and procedures to ensure reporter confidentiality, and used information from the system to provide feedback about safety issues to the select agent lab community. However, lessons from the literature and case studies suggest additional actions in assessment and the three key areas that could further improve reporting and the usefulness of the system as a source for safety data. These elements include an assessment of organizational culture, a lower threshold for reportable events, limited immunity provisions, and mechanisms for international lesson sharing and evaluation. Through these actions, efforts to identify areas for system improvement, target outreach and training, and encourage reporting could be supported. 
While other industries have developed industrywide SRSs, one does not exist for the broader laboratory community. However, recognizing the potential of such a system for the laboratory community, an interagency task force on biosafety recommended establishing one, and legislation to develop one has been proposed in Congress. While current safety guidance for biological labs is based on many years of experience working with infectious organisms and analyses of laboratory-acquired infections (LAI), there are some limitations to these data. For example, a widely recognized limitation is the high rate of underreporting of LAIs. In addition, accident and illness data are incomplete, and reported information usually does not fully describe factors contributing to the LAIs. Such issues limit the amount of information available for identification of systemic factors that can lead to accidents. A national laboratorywide voluntary SRS that is accessible to all labs and designed around specific goals and organizational culture would facilitate collection of such data to inform safety improvements. Analysis of these data could support evidence-based modifications to lab practices and procedures, reveal problems with equipment use or design, and identify training needs and requirements. Establishing such an SRS for the lab community, however, would require addressing some unique issues. Although our findings suggest that reporting systems should be tied to program goals and a clear sense of the organizational culture, this is problematic for biological labs because they are not a clearly identified or defined population. In addition, there is no agency or entity with the authority to direct such assessments across the entire lab community. Proposed federal legislation, if enacted, would establish a role for an SRS for the lab community to be administered by the Department of Health and Human Services (HHS) and the Department of Agriculture (USDA). 
If HHS and USDA are directed to develop such an SRS, certain features for the three key areas are suggested by existing studies, the CDC’s and APHIS’s experiences with the TLR reporting system, and biosafety specialists’ knowledge of organizational culture in labs and experiences with safety reporting. Lessons developed from experiences with the National Institutes of Health’s (NIH) prototype reporting system for its intramural research labs might inform design and implementation considerations as well. In addition, stakeholder involvement in goal setting is particularly important given the issues related to visibility and oversight of the broader lab population. The greater the stakeholder involvement, the greater the likelihood the perspectives of labs with varying environments and cultures will be represented. Stakeholders may also have knowledge of, and access to, labs that can support cultural assessments and encourage reporting. Such assessments are important for understanding differences in organizational cultures across the diverse types and levels of labs that could affect choices for system scope and features. Until a cultural assessment is conducted, existing information about likely system goals and labs’ organizational culture suggests certain features in the three key areas—reporting and analysis, reporter protections and incentives, and feedback mechanisms. With respect to reporting and analysis, a variety of factors suggest voluntary reporting for labs outside the Select Agent Program, including likely system goals for learning rather than enforcement and the need to collect information on incidents and hazards as opposed to serious accidents. 
In addition, the lab community’s limited experience with this type of reporting, the diversity of lab environments, and uncertainty about the reporting population suggest an initially open classification scheme that allows workers to report events in their own words, using multimode (Web or postal) and open-format reporting options that are available to all workers. These options can facilitate reporting in such situations. Lastly, the advantages and disadvantages inherent in SRS administration at either the local or higher level suggest that dual reporting options may be necessary. Such options— present in different forms in all three case industries—allow workers to submit reports to whichever level is most comfortable for them. For example, workers would have the choice of whether to report to an internal, lab-managed reporting program that feeds data to a central authority or to an independent, externally managed SRS. Both of these reporting options will also require strong confidentiality protections, data deidentification, and other reporting incentives to foster trust in reporting. Finally, feedback mechanisms for disseminating safety data or recommendations and evaluations are needed to promote worker buy-in for reporting, identify areas for targeted outreach and training, and identify areas for system improvement. 
In developing legislation for a national reporting system for the biological laboratory community, Congress should consider provisions for the agency it designates as responsible for the system to take into account the following in design and implementation: include stakeholders in setting system goals; assess labs’ organizational culture to guide design and implementation; make reporting voluntary, with open-reporting formats that allow workers to report events in their own words and that can be submitted by all workers in a variety of modes (Web or postal), with the option to report to either an internal or external entity; incorporate strong reporter protections, data deidentification measures, and other incentives for reporting; develop feedback mechanisms and an industry-level entity for disseminating safety data and safety recommendations across the lab community; and ensure ongoing monitoring and evaluation of the safety reporting system and safety culture. To improve the system for reporting the theft, loss, and release of select agents, we recommend that the Centers for Disease Control and Prevention and Animal and Plant Health Inspection Service Select Agent Program, in coordination with other relevant agencies, consider the following changes to their system: lower the threshold of event reporting to maximize collection of information that can help identify systemic safety issues; offer limited immunity protections to encourage reporting; and develop (1) mechanisms for sharing safety data for international lab safety improvement efforts and (2) processes for identifying reporting gaps and system evaluation to support targeted outreach and system modification. We provided a draft of this report to the Department of Transportation (DOT), HHS, INPO, NASA, NRC, USDA, and VA for review and comment. In written comments, the DOT, INPO, NASA, NRC, and VA agreed with our findings and conclusions and provided technical comments, which we addressed, as appropriate. 
The DOT’s FAA and NASA also provided positive comments on the quality of our review. In particular, the FAA reviewer indicated that it was an excellent report that addressed the factors that should be considered by an organization planning to implement a safety reporting system. Similarly, the NASA reviewer noted that this was an excellent document describing the many aspects of safety reporting systems, and that it had captured the complexity and dynamic nature of the SRS approach to obtaining safety information from the frontline. In written comments, the HHS noted that GAO’s thorough case studies of long-standing industrywide safety reporting systems would be helpful when considering the important issue of reporting systems in biological laboratories. However, the HHS disagreed with two of our recommendations, and partially agreed with a third, to improve the theft, loss, and release (TLR) reporting system for select agents. Specifically, the HHS disagreed with our first recommendation—to lower the threshold for reportable events to maximize information collection—noting that the current mandatory reporting thresholds for the Select Agent Program (SAP) provide sufficiently robust information. While we appreciate the CDC and APHIS Select Agent Program’s efforts to clarify reporting requirements to ensure all thefts, losses, and releases are reported, lowering reporting thresholds could further ensure all relevant reports are received. With lower reporting thresholds, questionable events are less likely to go unreported because of confusion about whether to report. Furthermore, we note that reporting below the currently established threshold could be voluntary, thereby offering registered entities a convenient, optional mechanism for sharing identified hazards. This is similar to the agencies’ recently initiated, anonymous fraud, waste, and abuse reporting system. 
However, reporting to the TLR system would enable follow-up and feedback with the reporting lab because of its confidential, as opposed to anonymous, nature. Lastly, biosafety specialists we spoke with, as well as HHS staff involved in updating the BMBL, specifically noted the lack of available data for developing evidence-based biosafety guidelines. Data collected through the TLR system—especially if it is more comprehensive—could provide such data. The HHS also disagreed with our second recommendation—to offer limited immunity protections to encourage reporting. While the HHS agrees that identification of safety issues is important, it believes it does not have statutory authority to offer limited immunity. The Public Health Security and Bioterrorism Preparedness and Response Act of 2002 required the Secretary of HHS to promulgate regulations requiring individuals and entities to notify HHS and others in the event of the theft, loss, or release of select agents and toxins. Violations of the Select Agent Regulations may result in criminal or civil money penalties. While we do not suggest that the HHS waive these penalties under a limited immunity provision, the Act sets maximum civil money penalties for Select Agent Regulations violations at $250,000 for individuals and $500,000 for entities, which gives the Secretary of HHS—an authority now delegated to the HHS Inspector General—discretion to impose penalties up to those maximum amounts. In addition, while reporting is required by law, individuals or entities may be concerned that reporting thefts, losses, or releases may lead to increased inspections by the CDC or referral to the Inspector General of the Department of Health and Human Services for investigation and possible penalties. 
Therefore, we recommend the CDC, in conjunction with other pertinent oversight agencies, examine whether adding limited immunity protections to the TLR reporting system would ease individuals’ and entities’ fears of reporting and encourage them to provide more complete information on thefts, losses, and releases. One possible way to incorporate limited immunity protections into the TLR reporting system would be to lower the civil money penalty for those individuals or entities who properly filed a TLR report, should penalties be appropriate for the theft, loss, or release being reported. We believe the Secretary of HHS has sufficiently broad authority under the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 to provide such immunity protections. The literature and our case studies identified limited immunity as a key incentive for reporting, and HHS’s Trans-Federal Task Force on Optimizing Biosafety and Biocontainment Oversight noted the potential of the Aviation Safety Reporting System—and its associated immunity provisions—as a model for a national SRS for biological labs. Lastly, the HHS partially agreed with the third recommendation. While the agency agreed with the recommendation to develop processes for identifying reporting gaps and system evaluation to support targeted outreach and system modification, it disagreed with the recommendation to share TLR data for international lab safety improvement efforts. In particular, the HHS notes its lack of authority to regulate foreign laboratories and suggests such activities might be better placed elsewhere in the CDC. As the literature and case studies illustrate, it is important to share safety lessons as broadly as possible. Sharing TLR lessons does not involve regulation of foreign labs, so additional authority is not required. Furthermore, the recommendation is directed to the CDC SAP because it manages the TLR system. 
If the CDC SAP wished to delegate the responsibility for sharing TLR lessons with the international lab community to another HHS entity, it would satisfy the intent of the recommendation. The HHS also commented on the matters for congressional consideration, for example, suggesting additional matters that fall outside the scope of this review. The agency disagreed with GAO on several issues, such as (1) the scope of the recommendations, (2) the extent to which the biological lab industry might benefit from an SRS, (3) particular SRS features noted in the matters for congressional consideration, and (4) reporting thresholds and system management. These general comments and our responses to them are included in appendix IV. The HHS also provided technical comments which we addressed, as appropriate. In written comments, the USDA concurred with our recommendations, although they noted several disagreements in their detailed responses. With respect to our first recommendation—to lower reporting thresholds—the USDA noted, like the HHS, that (1) they believe the current reporting thresholds (providing 130 reports a year) are sufficiently robust and (2) APHIS’s other monitoring and surveillance activities are sufficient for monitoring safety and security conditions in select agent labs. As noted above, we believe that with lower reporting thresholds, questionable events are less likely to go unreported because of confusion about whether to report. Furthermore, we note that reporting below the currently established threshold could be voluntary, thereby offering registered entities a mechanism for sharing identified hazards in a system that would enable follow-up and feedback with reporters. Lastly, data collected through the TLR system—especially if it is more comprehensive—could provide data for updates to biosafety guidelines. 
In response to our second recommendation—to offer limited immunity protections—the USDA, like the HHS, believes it lacks statutory authority to offer such protections. As noted above, we believe the Secretary of Agriculture has sufficiently broad authority under the Agricultural Bioterrorism Protection Act of 2002 to provide such immunity protections for the TLR reporting system. However, in recognition that such provisions might require coordination with other agencies, we added this clarification to the recommendations. Lastly, in response to our third recommendation—to (1) share TLR data for international lab safety improvement efforts and (2) identify reporting gaps and conduct system evaluation—the USDA noted that it did not believe additional regulatory oversight was needed and that targeted education and safety training in high-risk areas would likely be more cost-effective. Our recommendation does not suggest any additional regulatory oversight. It is focused on broadly sharing lessons learned from the TLR system and on identifying areas—through analysis of TLR data and evaluation—for targeted outreach and training and for system modification. These actions are methods through which the USDA can better identify the “high-risk areas” the agency notes should be targeted for education and training. The USDA also noted that an example we provided of unreported LAIs demonstrates that these types of infections are infrequent. However, this is just one example of LAI underreporting and its consequences. As noted in the footnote preceding this example, in a review of the LAI literature, the authors identified 663 cases of subclinical infections and 1,267 overt infections with 22 deaths. The authors also note that these numbers “represent a substantial underestimation of the extent of LAIs.” SRSs are key tools for bringing forward such safety information—currently recognized as substantially underreported—in order to benefit the entire industry.
USDA’s written comments are included in appendix IV. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2642 or mccoolt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix details the methods we used to identify lessons for designing and implementing an effective safety reporting system (SRS) from (1) the literature and (2) case studies of SRSs in the airline, commercial nuclear power, and health care industries; and apply those lessons to (3) assess the theft, loss, and release (TLR) reporting system for the Select Agent Program and (4) suggest design and implementation considerations for a national SRS for all biological labs. To develop lessons from the literature, we used an iterative approach to search several venues (academic journals, agency and organization publications, and grey literature) for literature related to human factors, safety science, and SRS evaluation. We reviewed the publications generated through automated searches to identify (1) search terms for additional automated queries and (2) citations for publications that might be within our scope of interest. We ended the formal search for additional literature after reaching saturation in the publications generated from our search (i.e., no or few new publications). 
The literature we reviewed generally fell into one of two categories—safety science (including human factors and organizational safety) literature and descriptions of SRS features and evaluations. The safety science literature served as background information and also helped us develop familiarity with the safety science terms and theories required for our assessment of the SRS evaluation literature. The literature related to SRS features and evaluations was used to develop lessons for the first objective. We assessed the SRS evaluation literature for both methodological rigor and findings related to SRS design and implementation. For the methodological review, we assessed the appropriateness of the methods relative to the study objectives for all articles, and a sample (about half) received a secondary, independent review of methodological rigor. Studies that met our standards of methodological rigor were incorporated into the assessment, and findings related to system goals, cultural considerations, reporting and analysis features, reporter protections and incentives, and feedback mechanisms were coded to identify effective features and processes for SRS design and implementation. See the Bibliography of Articles Used to Develop SRS Lessons from the Literature for a list of the literature used to develop these lessons. To develop lessons from case studies of three industries, we (1) reviewed studies and documentation on a variety of SRSs in the three industries; (2) interviewed agency and organization officials knowledgeable about safety science and human factors engineering, reporting systems, and their own SRS programs; and (3) attended a variety of SRS and safety conferences.
We chose to focus on the aviation, commercial nuclear power, and health care industries because they are moderate- to high-risk industries that represent a variety of (1) organizational cultures, (2) length of experience using SRSs for safety improvement, and (3) feature and design choices in their SRS programs. While we collected information on a wide variety of safety reporting programs and systems in these industries—and in some cases comment on these different programs—we primarily developed our lessons from one reporting program in each of the three industries. Specifically, we developed lessons from the Federal Aviation Administration’s (FAA) National Aeronautics and Space Administration (NASA)-run Aviation Safety Reporting System (ASRS) in aviation, the Institute of Nuclear Power Operations’ (INPO®) Significant Event Evaluation-Information Network (SEE-IN®) system in commercial nuclear power, and the VA’s internally managed Patient Safety Information System (PSIS) and NASA-managed Patient Safety Reporting System (PSRS) in VA health care. We chose to focus on these systems because they represent fairly long-standing, nonregulatory, domestic, industrywide or servicewide reporting programs. For example, NASA’s ASRS has been in operation for 34 years; INPO’s SEE-IN, for 30 years; and VA’s PSIS and PSRS, for 10 years. Although we primarily developed our lessons from these key SRSs, we also collected information on other notable SRSs in the industries, including the Nuclear Regulatory Commission’s (NRC) Allegations Program, the FAA’s Aviation Safety Action Program (ASAP), and the Agency for Healthcare Research and Quality’s (AHRQ) Patient Safety Organizations (PSO) program, among others. To assess the TLR reporting system, we interviewed agency officials, reviewed agency and other documentation, and applied lessons from the literature and case studies to these findings.
Specifically, using a standard question set, we interviewed HHS officials from the Coordinating Center for Infectious Diseases, Office of Health and Safety, and Division of Select Agents and Toxins, and received responses to our question set from the USDA’s Animal and Plant Health Inspection Service (APHIS). In addition, we attended an agency conference on select agent reporting and reviewed documents from this conference and from the National Select Agent Registry (NSAR) Web site, detailing TLR reporting requirements and scenarios. We also reviewed GAO testimony and reports on previously identified TLR reporting issues. We then applied the lessons for SRS design and implementation derived from the literature and case studies as criteria to identify areas for improvement of the TLR system. To propose design and implementation considerations for a national biological laboratory reporting system, we reviewed studies and other reports on biosafety, interviewed HHS officials and domestic and international biosafety specialists, attended conferences on biosafety and incident reporting, and applied lessons from the literature and case studies to these findings. We interviewed HHS officials and biosafety specialists to understand the culture-related context for, and potential barriers to, an SRS for biological labs. Specifically, we used a standardized question set to gather specialists’ views about overall design and implementation considerations for a labwide reporting program, as well as how lab culture and safety orientation (1) vary by level and type of lab; (2) affect reporting under current requirements; and (3) might affect reporting to a national biological lab SRS. We conducted this performance audit from March 2008 through September 2010 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Lessons from the case studies:

(1) Assessment, dedicated resources, and management focus are needed to understand and improve safety culture.
(1a) Assessing safety culture can alert management to workplace safety issues.
(1b) Improving safety culture requires dedicated resources, including time, training, and staff investment.
(1c) Changing safety culture requires management focus.
(2) Broad reporting thresholds, experience-driven classification schemes, and processing at the local level can be useful SRS features in industries new to safety reporting.
(2a) Broad thresholds and open reporting are useful features when starting an SRS.
(2b) Encouraging workers to report incidents in their own words facilitates reporting initially.
(2c) Reporting options with some local-level processing facilitate reporting initially.
(3) Strong legal protections and incentives encourage reporting and help prevent confidentiality breaches.
(4) A central, industry-level unit facilitates lesson sharing and evaluation.

Associated design and implementation considerations:

Involve stakeholders (e.g., management, industry groups, associations, and workers) in development of program goals and SRS design to increase support among key populations.
Assess organizational culture to guide system design choices in the three key areas.
Ensure that reporters and system administrators receive adequate training regarding the function and application of the reporting system.
Base the decision for mandatory or voluntary reporting on (a) the level of event of interest and (b) whether the SRS will be used primarily for enforcement or learning.
Set reporting thresholds that are not so high that reporting is curtailed, nor so low that the system is overwhelmed by the number and variety of reportable events.
Develop classification schemes and associated terms that are clear, easy to understand, and easy to use by drawing on terms already well understood in the industry.
Test whether classification terms are clearly understood by different groups in the organization.
Allow sufficient flexibility to (a) avoid narrowing the scope of reporting in a way that limits all events of interest at the chosen level of event, (b) allow different sites—if multiple sites will be reporting to the same system—to adapt fields and elements to match their own organizational culture, and (c) capture different types of events and precursors as they can change over time.
Develop a classification scheme that best suits analytical requirements and the comfort level of the organizational culture with safety reporting and safety event terms.
Base decisions about report mode on (a) the accessibility of the mode to the reporting population and (b) workers’ concerns about and willingness to report.
Base decisions about report formats on the (a) type of data needed for analysis, (b) capabilities of the reporting population, and (c) maturity of existing safety event classification schemes within the industry.
Base the decision for internal or external system administration on (a) workers’ degree of concern over punishment and confidentiality and (b) the availability of internal expertise and resources to analyze and encourage reporting.
Base decisions about who will be allowed to report on (a) awareness of reporting hierarchies and (b) the type of information desired for analysis.
Use a report prioritization process to quickly and efficiently address key safety issues as they arise.
Align analysis decisions with (a) report formats, (b) system administration and location of technical expertise, and (c) availability of other relevant data needed for analysis.
Base the choice between anonymity and confidentiality on (a) organizational culture, especially workers’ degree of concern about punishment and confidentiality, and (b) the amount of detail required for analysis and whether it can be collected without follow-up.
Consider a hybrid system in which confidential and anonymous reporting are used simultaneously if there is conflict between organizational culture and data need.
Develop data deidentification measures to support confidentiality and data-sharing efforts.
Consider limited immunity provisions to increase the reporting incentive.
Provide direct feedback to reporters to foster worker-specific buy-in for reporting.
Provide regular, timely, and routine feedback—for example, in the form of newsletters, e-mail alerts, Web sites, and searchable databases—to support overall organizational buy-in for reporting.
Provide positive feedback to managers who receive a high volume of reports to demonstrate the importance of reporting and counteract the perception that error reporting reflects poorly on management.
Use the data to identify reporting gaps for targeted outreach and training.
Evaluate the effectiveness of the SRS to support ongoing modification and improvement.

The following are GAO’s comments on the Department of Health and Human Services’ letter, dated August 16, 2010.

1. We disagree. We do understand that the scope of statutory authority for the Select Agent Program is limited to registered entities. That is why our recommendations for improvements to the TLR program are directed to the CDC and APHIS, while recommendations for a national SRS for all labs are directed to Congress through matters for congressional consideration.
We do not make recommendations for the national SRS to the CDC or APHIS because they do not have authority for labs outside the Select Agent Program. Furthermore, the recommendations, as well as the matters for congressional consideration, are directly linked and logically follow from the data presented in the report. This report has two objectives (the third and fourth) related to an SRS for biological labs and two sets of recommendations that flow from those objectives. We have structured our report this way because we recognize that the statutory authority for the Select Agent Program is limited to the oversight of biosafety at registered entities and that creation of a new safety reporting system would require new authority and resources. In particular:

Objective 3—applying lessons from SRS literature and case studies to assess the theft, loss, and release (TLR) reporting system, part of the Select Agent Program—focuses on the TLR system and thus applies only to registered entities and associated labs. The recommendations derived from this review of the TLR system are directed to the CDC and APHIS Select Agent Program because they have the statutory authority for this system.

Objective 4—applying lessons from SRS literature and case studies to suggest design and implementation considerations for a national safety reporting system—applies to all biological laboratories, in particular those outside the Select Agent Program. Because there is currently no agency with specific authority for such a system to which we could direct recommendations, they are directed to Congress through matters for congressional consideration.

2. We disagree. We recognize that implementation of any program has costs. However, evidence from the literature indicates that the benefits of an SRS can far outweigh the costs; this position was also endorsed by experts from the three case study industries.
While we certainly encourage the NIH and CDC Select Agent Program efforts to share information that is currently reported, assessing the sufficiency of existing data was not within the scope of this engagement. In its comments on an earlier report on oversight of high-containment labs (GAO-09-574), the HHS agreed with our recommendation that lessons learned should be synthesized and shared with the broader community. It further noted that while the HHS and USDA have the ability to gather such data for laboratories registered with the Select Agent Program, a separate mechanism must be identified to gather information about releases in laboratories that do not work with select agents. A national SRS for all biological laboratories is such a mechanism. In addition, the Trans-Federal Task Force on Optimizing Biosafety and Biocontainment Oversight—co-chaired by the HHS and USDA—recommended a new voluntary, nonpunitive incident-reporting system, and pending legislation in both the House and Senate would establish such a system. For these reasons, we did not revisit the issue of whether a nationwide SRS for biological labs is necessary. Instead, we agreed to examine the literature and SRSs in other industries to support effective design and implementation of such a system, should it be established.

3. The concerns raised here do not accurately characterize the message and matters conveyed in the report, and they are not supported by evidence from the literature and our case studies. Specifically, (1) our recommendation to allow workers to report in their own words does not equate to “free-form reporting.” Rather, it relates to how errors are classified and labeled and where in the process that should take place.
(See the sections “Lesson 2: Broad Reporting Thresholds, Experience-Driven Classification Schemes, and Processing at the Local Level Are Useful Features in Industries New to Safety Reporting” and “Encouraging Workers to Report Incidents in Their Own Words Facilitates Reporting Initially” for further detail.) In commenting on this issue, an internationally recognized SRS expert at NASA noted that, while highly structured reporting forms may decrease the analytical workload, the data quality is largely sacrificed for this false sense of efficiency. Requiring the reporter to also be the analyst—evaluating aspects of the event—creates unreliable assessments because of the variability in workers’ perspectives. Open-field narrative has the best hope of providing insights that are largely unknown by personnel who invent the structured questions. Consequently, allowing workers to report in their own words and applying error classifications at the analytical level serve to improve, rather than degrade, data quality. In addition, an SRS does not inherently produce unintelligible reports, redundant data, lack of quality control, and unreliable statistics. One of our key messages is that determining system goals—such as for specific analytical capabilities or means to identify specific locations or groups—is essential to do up front, in order to select system features compatible with these goals. In the section “Program Goals and Organizational Culture Guide Safety Reporting System Design and Implementation in Three Key Areas,” we describe the pros and cons of different system features and how choices for specific features should logically flow from system goals and assessment of organizational culture. We have recommended, for congressional consideration, certain features for a national SRS for biological labs that appear best aligned with existing information about system goals and lab culture.

4.
The importance of culture in SRS design and implementation is foundational in our report and is reflected in our graphics, findings, conclusions, and matters for congressional consideration.

5. We agree that this is a useful clarification and have made this change, as appropriate, throughout the report.

6. We do not confuse the TLR with a safety reporting system. We are aware that the system serves a regulatory function, and we recognize this in the body of the report. However, we also recognize that this is not a dichotomy—the TLR’s regulatory function does not preclude its usefulness as a safety tool. In fact, we commend the CDC and APHIS Select Agent Program for recognizing the TLR’s potential beyond its regulatory function. In particular, in the section “The CDC and APHIS Have Taken Steps to Improve the Usefulness of the TLR Reporting System; Lessons from the Literature and Case Studies Suggest Additional Steps,” we comment on the agencies’ recognition of the system’s usefulness for providing safety improvement data, and our recommendations reflect enhancements to the system for this purpose. In addition, while we agree that a national reporting system might address the issue of capturing events (such as near misses or identified hazards) that are below the threshold for reporting to the TLR system, no such system currently exists. Consequently, the TLR system is the only existing system situated to capture this information.

7. We recognize that implementation of any program has costs. However, evidence from the literature indicates that the benefits of an SRS can far outweigh the costs, a position that was also endorsed by experts from the three case study industries.
We agree that dedicating resources is essential to successfully implement an SRS program, and this is reflected in the first lesson derived from the case studies—“Assessment, dedicated resources, and management focus are needed to understand and improve safety culture.” However, it is outside the scope of this report to add a matter for congressional consideration to assess the relative priority of implementing a safety reporting system as compared with other biosafety improvements. See also comment 2 above, in response to HHS’s earlier remark about evaluating whether, and not how, to develop a national SRS for biological labs.

8. We agree this is an important consideration. In the section “Level of Event: The Severity of Events Captured Generally Determines Whether an SRS Is Mandatory or Voluntary,” we note that mandatory reporting is generally preferred when program goals are focused on enforcement of regulations. Serious events—such as accidents resulting in injuries or deaths—are typically the level of event collected in mandatory SRSs, whereas voluntary reporting is generally preferred when learning is the goal. The purpose of a national SRS for all labs would likely be learning rather than compliance because the Select Agent Program, through the TLR system, already manages the regulatory function for the most dangerous pathogens. Accordingly, it is logical that a national SRS for all biological labs would be a voluntary, nonregulatory system.

9. Evidence from the literature and our case studies does not support this argument. While we appreciate the NIH’s concerns about the clarity of reporting requirements, we found that mandatory and voluntary systems are often employed concurrently—sometimes independently and sometimes in complementary roles—because programs face the dual requirements of regulating and promoting safety improvement.
In order to ensure appropriate levels of reporting, however, we also note the importance of setting clear goals and reporting thresholds for each system and communicating reporting requirements to the lab community. In addition, evaluation is an important tool for identifying and addressing such problems. Consequently, we recommended evaluation for both the TLR system and the national SRS for biological labs. In addition to the contact named above, Rebecca Shea, Assistant Director; Amy Bowser; Barbara Chapman; Jean McSween; Laurel Rabin; and Elizabeth Wood made major contributions to this report. Aagaard, L., B. Soendergaard, E. Andersen, J. P. Kampmann, and E. H. Hansen. “Creating Knowledge About Adverse Drug Reactions: A Critical Analysis of the Danish Reporting System from 1968 to 2005.” Social Science & Medicine, vol. 65, no. 6 (2007): 1296-1309. Akins, R. B. “A Process-centered Tool for Evaluating Patient Safety Performance and Guiding Strategic Improvement.” In Advances in Patient Safety: From Research to Implementation, 4, 109-125. Rockville, Md.: Agency for Healthcare Research and Quality, 2005. Anderson, D. J. and C. S. Webster. “A Systems Approach to the Reduction of Medication Error on the Hospital Ward.” Journal of Advanced Nursing, vol. 35, no. 1 (2001): 34-41. Arroyo, D. A. “A Nonpunitive, Computerized System for Improved Reporting of Medical Occurrences.” In Advances in Patient Safety: From Research to Implementation, 4, 71-80. Rockville, Md.: Agency for Healthcare Research and Quality, 2005. Bakker, B. “Confidential Incident Reporting Systems for Small Aviation Communities on a Voluntary Basis.” Aviation Safety (1997): 790-720. Baldwin, I., U. Beckman, L. Shaw, and A. Morrison. “Australian Incident Monitoring Study in Intensive Care: Local Unit Review Meetings and Report Management.” Anaesthesia and Intensive Care, vol. 26, no. 3 (1998): 294-297. Barach, P. and S. D. Small.
“Reporting and Preventing Medical Mishaps: Lessons from Non-medical Near Miss Reporting Systems.” British Medical Journal, 320 (2000): 759-763. Battles, J. B., H. S. Kaplan, T. W. Van der Schaaf, and C. E. Shea. “The Attributes of Medical Event-Reporting Systems: Experience with a Prototype Medical Event-Reporting System for Transfusion Medicine.” Archives of Pathology & Laboratory Medicine, vol. 122, no. 3 (1998): 231-238. Battles, J. B., N. M. Dixon, R. J. Borotkanics, B. Rabin-Fastmen, and H. S. Kaplan. “Sensemaking of Patient Safety Risks and Hazards.” Health Services Research, vol. 41, no. 4 (2006): 1555-1575. “Reporting Tools.” International Journal of Applied Aviation Studies, vol. 2, no. 2 (2002): 11-36. Beckett, M. K., D. Fossum, C. S. Moreno, J. Galegher, and R. S. Marken. “A Review of Current State-Level Adverse Medical Event Reporting Practices Toward National Standards.” RAND Health: Technical Report. 2006. Berkowitz, E. G., M. E. Ferrant, L. B. Goben, K. E. McKenna, and J. L. Robey. “Evaluation of Online Incident Reporting Systems.” Duke University School of Nursing (2005): 1-27. Billings, C. E. “Some Hopes and Concerns Regarding Medical Event-Reporting Systems.” Archives of Pathology & Laboratory Medicine, vol. 122, no. 3 (1998): 214-215. Bloedorn, E. Mining Aviation Safety Data: A Hybrid Approach. The MITRE Corporation, 2000. Braithwaite, J., M. Westbrook, and J. Travaglia. “Attitudes Toward the Large-scale Implementation of an Incident Reporting System.” International Journal for Quality in Health Care, vol. 20, no. 3 (2008): 184-191. Centers for Disease Control and Prevention. CDC Workbook on Implementing a Needlestick Injury Reporting System. 2008. Chidester, T. R. Voluntary Aviation Safety Information-Sharing Process: Preliminary Audit of Distributed FOQA and ASAP Archives Against Industry Statement of Requirements. DOT/FAA/AM-07/7. A report prepared at the request of the Federal Aviation Administration. 2007. Clarke, J. R.
“How a System for Reporting Medical Errors Can and Cannot Improve Patient Safety.” The American Surgeon, vol. 72, no. 11 (2006): 1088-1091. Connell, L. J. Cross-Industry Applications of a Confidential Reporting Model, 139-146. Washington, D.C.: National Academy of Engineering, 2004. Council of Europe: Expert Group on Safe Medication Practices. Creation of a Better Medication Safety Culture in Europe: Building Up Safe Medication Practices. P-SP-PH/SAFE. 2006. Dameron, J. and L. Ray. Hospital Adverse Event Reporting Program: An Initial Evaluation. Oregon Patient Safety Commission, 2007. Daniels, C. and P. Marlow. Literature Review on the Reporting of Workplace Injury Trends. HSL/2005/36. Buxton, Derbyshire, UK: Health and Safety Laboratory, 2005. Department of Defense. Assistant Secretary of Defense for Health Affairs. Military Health System Clinical Quality Assurance Program Regulation. DoD 6025.13-R. 2004. Desikan, R., M. J. Krauss, W. Claiborne Dunagan, E. C. Rachmiel, T. Bailey, and V. J. Fraser. “Reporting of Adverse Drug Events: Examination of a Hospital Incident Reporting System.” In Advances in Patient Safety: From Research to Implementation, 1, 145-160. Rockville, Md.: Agency for Healthcare Research and Quality, 2005. Evans, S. M., J. G. Berry, B. J. Smith, A. Esterman, P. Selim, J. O’Shaughnessy, and M. DeWit. “Attitudes and Barriers to Incident Reporting: A Collaborative Hospital Study.” Quality and Safety in Health Care, 15 (2006): 39-43. Fernald, D. H., W. D. Pace, D. M. Harris, D. R. West, D. S. Main, and J. M. Westfall. “Event Reporting to a Primary Care Patient Safety Reporting System: A Report from the ASIPS Collaborative.” Annals of Family Medicine, vol. 2, no. 4 (2004): 327-332. Flack, M., T. Reed, J. Crowley, and S. Gardner. “Identifying, Understanding, and Communicating Medical Device Use Errors: Observations from an FDA Pilot Program.” In Advances in Patient Safety: From Research to Implementation, 3, 223-233.
Rockville, Md.: Agency for Healthcare Research and Quality, 2005. Flink, E., C. L. Chevalier, A. Ruperto, P. Dameron, F. J. Heigel, R. Leslie, J. Mannion, and R. J. Panzer. “Lessons Learned from the Evolution of Mandatory Adverse Event Reporting Systems.” In Advances in Patient Safety: From Research to Implementation, 1-4, 135-151. Rockville, Md.: Agency for Healthcare Research and Quality, 2005. Flowers, L. and T. Riley. State-based Mandatory Reporting of Medical Errors: An Analysis of the Legal and Policy Issues. National Academy for State Health Policy, 2001. Frey, B., V. Buettiker, M. I. Hug, K. Waldvogel, P. Gessler, D. Ghelfi, C. Hodler, and O. Baenziger. “Does Critical Incident Reporting Contribute to Medication Error Prevention?” European Journal of Pediatrics, vol. 161, no. 11 (2002): 594-599. Ganter, J. H., C. D. Dean, and B. K. Cloer. Fast Pragmatic Safety Decisions: Analysis of an Event Review Team of the Aviation Safety Action Partnership. SAND2000-1134. Albuquerque, N.M.: Sandia National Laboratories, 2000. Gayman, A. J., A. W. Schopper, F. C. Gentner, M. C. Neumeier, and W. J. Rankin. Crew Resource Management (CRM) Anonymous Reporting System (ARS) Questionnaire Evaluation. CSERIAC Report CSERIAC-RA-96-003. 1996. Global Aviation Information Network (GAIN) Working Group E. “A Roadmap to a Just Culture: Enhancing the Safety Environment.” Flight Safety Digest, vol. 24, no. 3 (2005): 1-48. Grant, M. J. C. and G. Y. Larsen. “Effect of an Anonymous Reporting System on Near-miss and Harmful Medical Error Reporting in a Pediatric Intensive Care Unit.” Journal of Nursing Care Quality, vol. 22, no. 3 (2007): 213-221. Harper, M. L. and R. L. Helmreich. “Identifying Barriers to the Success of a Reporting System.” In Advances in Patient Safety: From Research to Implementation, 3, 167-179. Rockville, Md.: Agency for Healthcare Research and Quality, 2004. Hart, C. A.
“Stuck on a Plateau: A Common Problem.” In Accident Precursor Analysis and Management: Reducing Technological Risk Through Diligence, 147-154. Phimister, J. R., V. M. Bier, and H. C. Kunreuther, Eds. Washington, D.C.: National Academies Press, 2004. Holden, R. J. and B.-T. Karsh. “A Review of Medical Error Reporting System Design Considerations and a Proposed Cross-Level Systems Research Framework.” Human Factors: The Journal of the Human Factors Society, vol. 49, no. 2 (2007): 257-276. Holzmueller, C. G., P. J. Pronovost, F. Dickman, D. A. Thompson, A. W. Wu, L. H. Lubomski, M. Fahey, D. M. Steinwachs, L. Engineer, A. Jaffrey, et al. “Creating the Web-based Intensive Care Unit Safety Reporting System.” Journal of the American Medical Informatics Association, vol. 12, no. 2 (2005): 130-139. International Atomic Energy Agency. The IAEA/NEA Incident Reporting System: Using Operational Experience to Improve Safety. International Atomic Energy Agency. Safety Culture. A Report by the International Nuclear Safety Advisory Group. Safety Series: 75-INSAG-4. International Nuclear Safety Advisory Group, 1991. Johnson, C. “Software Tools to Support Incident Reporting in Safety-Critical Systems.” Safety Science, vol. 40, no. 9 (2002): 765-780. Kaplan, H. and P. Barach. “Incident Reporting: Science or Protoscience? Ten Years Later.” Quality and Safety in Health Care, vol. 11, no. 2 (2002): 144-145. Kaplan, H., J. Battles, Q. Mercer, M. Whiteside, and J. Bradley. A Medical Event Reporting System for Human Errors in Transfusion Medicine, 809-814. Lafayette, Ind.: USA Publishing, 1996. Kaplan, H. S. and B. R. Fastman. “Organization of Event Reporting Data for Sense Making and System Improvement.” Quality and Safety in Health Care, vol. 12 (2003): ii68-ii72. Khuri, S. F. “Safety, Quality, and the National Surgical Quality Improvement Program.” The American Surgeon, vol. 72, no. 11 (2006): 994-998. Krokos, K. J. and D. P. Baker.
Development of a Taxonomy of Causal Contributors for Use with ASAP Reporting Systems, 1-59. American Institutes for Research, 2005. Leape, L. L. Reporting of Adverse Events. The New England Journal of Medicine, vol. 347, no. 20 (2002): 1633-1639. Lee, R. The Australian Bureau of Air Safety Investigation, in Aviation Psychology: A Science and a Profession, 229-242. U.K.: Ashgate Publishing, 1998. Martin, S. K., J. M. Etchegaray, D. Simmons, W. T. Belt and K. Clark. Development and “Implementation of The University of Texas Close Call Reporting System.” Advances in Patient Safety, vol. 2 (2005): 149-160. Morters, K., and R. Ewing. “The Introduction of a Confidential Aviation Reporting System into a Small Country.” Human Factors Digest, vol. 13 (1996): 198-203. Murff, H. J., D. W. Byrne, P. A. Harris, D. J. France, C. Hedstrom, and R. S. Dittus. “‘Near-Miss’ Reporting System Development and Implications for Human Subjects Protection.” In Advances in Patient Safety: From Research to Implementation, 3, 181-193. Rockville, Md.: Agency for Healthcare Research and Quality, 2005. Nakajima,K., Y. Kurata and H. Takeda. “A Web-based Incident Reporting System and Multidisciplinary Collaborative Projects for Patient Safety in a Japanese Hospital.” Quality and Safety in Health Care, vol. 14 (2005): 123-129. National Academy of Engineering of the National Academy. 2004. “The Accident Precursors Project: Overview and Recommendations.” In Accident Precursor Analysis and Management: Reducing Technological Risk Through Diligence, 1-34. Phimister, J. R., V. M. Bier, and H. C. Kunreuther, Eds. Washington, D.C.: National Academies Press, 2004. National Aeronautics and Space Administration. ASRS: The Case for Confidential Incident Reporting Systems. Pub 60. 2001. National Transportation Safety Board. Current Procedures for Collecting and Reporting U.S. General Aviation Accident and Activity Data. NTSB/SR-05/02. 2005. Nguyen, Q.-T., J. Weinberg, and L. H. Hilborne. 
“Physician Event Reporting: Training the Next Generation of Physicians.” In Advances in Patient Safety: From Research to Implementation, 4, 353-360. Rockville, Md.: Agency for Healthcare Research and Quality, 2005 Nielsen, K. J., O. Carstensen, K. and Rasmussen. “The Prevention of Occupational Injuries in Two Industrial Plants Using an Incident Reporting Scheme.” Journal of Safety Research, vol. 37 (2006): 479-486. Nørbjerg, P. M. “The Creation of an Aviation Safety Reporting Culture in Danish Air Traffic Control.” CASI (2003): 153-164. Advances in Patient Safety: From Research to Implementation, 4, 361- 374. Rockville, Md.: Agency for Healthcare Research and Quality, 2005. O’Leary, M. J and S. L. Chappell. Early Warning: Development of Confidential Incident Reporting Systems. NASA Center for AeroSpace Information, 1996. Page, W. D., E. W. Staton, G. S. Higgins, D. S. Main, D. R. West and D. M. Harris. “Database Design to Ensure Anonymous Study of Medical Errors: A Report from the ASIPS Collaborative.” Journal of the American Medical Informatics Association, vol. 10, no. 6 (2003): 531-540. Patankar, M. S. and J. Ma. “A Review of the Current State of Aviation Safety Action Programs in Maintenance Organizations.” International Journal of Applied Aviation Studies, vol. 6. no. 2 (2006): 219-233. Phillips, R. L., S. M. Dovey, J. S. Hickner, D. Graham and M. Johnson. “The AAFP Patient Safety Reporting System: Development and Legal Issues Pertinent to Medical Error Tracking and Analysis.” In Advances in Patient Safety: From Research to Implementation, 3, 121-134. Rockville, Md.: Agency for Healthcare Research and Quality, 2005. Phimister, J. R., V. M. Bier and H. C. Kunreuther. “Flirting with Disaster.” Issues in Science and Technology (2005). Pronovost, P. J., B. Weast, C. G. Holzmueller, B. J. Rosenstein, R. P. Kidwell, K. B. Haller, E. R. Feroli, J. B. Sexton, and H. R. Rubin. 
“Evaluation of the Culture of Safety: Survey of Clinicians and Managers in an Academic Medical Center.” Quality and Safety in Health Care, vol. 12, no. 6 (2003): 405-410. Ramanujam, R., D. J. Keyser and C. A. Sirio. “Making a Case for Organizational Change in Patient Safety Initiatives.” In Advances in Patient Safety: From Research to Implementation, 2, 455-465. Rockville, Md.: Agency for Healthcare Research and Quality, 2005. Raymond, B. and R. M. Crane. Design Considerations for Patient Safety Improvement Reporting System. Kaiser Permanente Institute for Health Policy, NASA Aviation Safety Reporting System, and The National Quality Forum, 2000. Rejman, Michael H. Confidential Reporting Systems and Safety-Critical Information, 397-401. Columbus, Ohio: Ohio State University, 1999. Reynard, W.D., C.E. Billings, E.S. Cheaney and R. Hardy. The Development of the NASA Aviation Safety Reporting System, Pub 34. NASA Reference Publication, 1986. Ricci, M., A. P. Goldman, M. R. de Leval, G. A. Cohen, F. Devaney and J. Carthey. “Pitfalls of Adverse Event Reporting in Pediatric Cardiac Intensive Care.” Archives of Disease in Childhood, 89 (2004): 856-859. Ruchlin, H. S., N. L. Dubbs, M. A. Callahan and M. J. Fosina. “The Role of Leadership in Installing a Culture of Safety: Lessons from the Literature.” Journal of Healthcare Management, vol. 49, no. 1 (2004): 47-59. Schleiffer, S. C. “We Need to Know What We Don’t Know.” International Air Safety Seminar, 35 (2005): 333-340. Snijders, C., R. A. van Tingen, A .Molendijk, and W. P. F. Fetter. “Incidents and Errors in Neonatal Intensive Care: A Review of the Literature.” Archives of Disease in Childhood, Fetal and Neonatal Editon, vol. 92, no. 5 (2007): 391-398. Staender, S., J. Davies, B. Helmreich, B. Sexton and M. Kaufmann. “The Anaethesia Critical Incident Reporting System: An Experience Based Dataset.” International Journal of Medical Informatics, vol. 47, no. 1-2 (1997): 87-90. Stainsby, D., H. Jones, D. Asher, C. 
Atterbury, A. Boncinelli, L. Brant, C. E. Chapman, K. Davison, R. Gerrard, A. Gray et al. “Serious Hazards of Transfusion: A Decade of Hemovigilance in the UK.” Transfusion Medicine Reviews, vol. 20, no. 4 (2006): 273-282. Stalhandske, E., J. P. Bagian, and J. Gosbee. “Department of Veterans Affairs Patient Safety Program.” American Journal of Infection Control, vol. 30, no. 5 (2002): 296-302. Stump, L. S. “Re-engineering the Medication Error-Reporting Process: Removing the Blame and Improving the System.” American Journal of Health-System Pharmacy, vol. 57, no. 24 (2000): S10-S17. Suresh, G., J. D. Horbar, P. Plsek, J. Gray, W. H. Edwards, P. H. Shiono, R. Ursprung, J. Nickerson, J. F. Lucey, and D. Goldmann. “Voluntary Anonymous Reporting of Medical Errors for Neonatal Intensive Care.” Pediatrics, 113 (2004): 1609-1618. Tamuz, M. “Learning Disabilities for Regulators: The Perils of Organizational Learning in the Air Transportation Industry.” Administration & Society, 33 (2001): 276-302. Tamuz, M. and E. J. Thomas. “Classifying and Interpreting Threats to Patient Safety in Hospitals: Insights from Aviation.” Journal of Organizational Behavior, 27 (2006): 919-940. Taylor, J. A., D. Brownstein, E. J. Klein, and T. P. Strandjord. “Evaluation of an Anonymous System to Report Medical Errors in Pediatric Inpatients.” Journal of Hospital Medicine, vol. 2, no. 4 (2007): 226-233. Tuttle, D., R. Holloway, T. Baird, B. Sheehan, and W. K. Skelton. “Electronic Reporting to Improve Patient Safety.” Quality and Safety in Health Care, 13 (2004): 281-286. U.S. Department of Energy. Office of Environment, Safety and Health. Occurrence Reporting and Processing of Operations Information. DOE 231.1-2. 2003. U.S. Nuclear Regulatory Commission. Working Group on Event Reporting. Final Report of the Working Group on Event Reporting. 2001. Ulep, S. K. and S. L. Moran. 2005. 
“Ten Considerations for Easing the Transition to a Web-based Patient Safety Reporting System.” In Advances in Patient Safety: From Research to Implementation, 3, 207-222. Rockville, Md.: Agency for Healthcare Research and Quality. Underwood, P. K (LTC). Medical Errors: An Error Reduction Initiative,1- 75. U.S. Army—Baylor University Graduate Program in Healthcare Adminstration, 2001. van der Schaaf, T. W. and L. Kanse. Checking for Biases in Incident Reporting, 119-126. Washington D.C.: National Academy of Engineering, 2004. van der Schaaf, T. W. “Medical Applications of Industrial Safety Science.” Quality and Safety in Health Care, vol. 11, no. 3 (2002): 205-206. Wallace, B., A. Ross and J. B. Davies. “Applied Hermeneutics and Qualitative Safety Data: The CIRAS Project.” Human Relations, vol. 56, no. 5 (2003): 587-607. Webster, C. S. and D. J. Anderson. “A Practical Guide to the Implemenation of an Effective Incidence Reporting Scheme to Reduce Medication Error on the Hospital Ward.” International Journal of Nursing Practice, vol. 8, no. 4 (2002): 176-183. Weinberg, J., L. H. Hilborne, Q.-T. Nguyen. “Regulation of Health Policy: Patient Safety and the States.” In Advances in Patient Safety: From Research to Implementation, 1, 405-422. Rockville, Md.: Agency for Healthcare Research and Quality, 2005. Weiner, B. J., C. Hobgood, and M. Lewis. “The Meaning of Justice in Safety Incident Reporting.” Social Science & Medicine, vol. 66, no. 2 (2008): 403- 413. Wiegmann, D. A. and T.L. von Thaden. The Critical Event Reporting Tool (CERT); Technical Report. University of Illinois at Urbana-Champaign: Aviation Research Lab, Institute of Aviation. ARL-01-7/FAA-01-2. 2001. Wilf-Miron, R., I. Lewenhoff, Z. Benyamini, and A. Aviram. “From Aviation to Medicine: Applying Concepts of Aviation Safety to Risk Management in Ambulatory Care.” Quality and Safety in Health Care, vol.12, no. 1 (2003): 35-39. Wu, A. W, P. Pronovos and L. Morlock. 
“ICU Incident Reporting Systems.” Journal of Critical Care, vol. 17, no. 2 (2002): 86-94. Yong, K. “An Independent Aviation Accident Investigation Organization in Asia Pacific Region—Aviation Safety Council of Taiwan.” International Air Safety Seminar Proceedings, 173-180. 2000. Barhydt, R. and C. A. Adams. Human Factors Considerations for Area Navigation Departure and Arrival Procedures. A report prepared for NASA. 2006. Besser, R. E. Oversight of Select Agents by the Centers for Disease Control and Prevention. Testimony before Subcommittee on Oversight and Investigations, Committee on Energy and Commerce, United States House of Representatives. 2007 Center for Biosecurity. University of Pittsburgh Medical Center. Response to the European Commission’s Green Paper on Bio-preparedness. 2007. Gronvall G. K., J. Fitzgerald, A. Chamberlain, T. V. Inglesby, and T. O’Toole. “High-Containment Biodefense Research Laboratories: Meeting Report and Center Recommendations.” Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, vol. 5, no. 1 (2007): 75-85. Gronvall G. K. Germs, Viruses, and Secrets: The Silent Proliferation of Bio-Laboratories in the United States. University of Pittsburgh Medical Center, Center for Biosecurity, 2007. Gronvall G. K., J. Fitzgerald, T.V. Inglesby, and T. O’Toole. “Biosecurity: Responsible Stewardship of Bioscience in an Age of Catastrophic Terrorism.” Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, vol. 1, no. 1 (2003): 27-35. Hallbert, B., R. Boring, D. Gertman, D. Dudenhoeffer, A. Whaley, J. Marble, J. Joe, and E. Lois. Human Event Repository and Analysis (HERA) System, Overview, vol 1. Idaho National Laboratory, U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. NUREG/CR-6903, 2006. Hallbert, B and A. Kolaczkowski, eds. The Employment of Empirical Data and Bayesian Methods in Human Reliability Analysis: A Feasibility Study. 
Office of Nuclear Regulatory Research, United States Nuclear Regulatory Commission. NUREG/CR-6949. 2007. Hallbert, B., A. Whaley, R. Boring, P. McCabe and Y. Chang. Human Event Repository and Analysis (HERA): The HERA Coding Manual and Quality Assurance, vol 2. Idaho National Laboratory, U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. NUREG/CR-6903. 2007. Harding, A. L. and K. B. Byers. “Epidemiology of Laboratory-Associated Infections.” In Biological Safety: Principles and Practices, Third Edition, 35-56. Fleming, D. O. and D. L. Hunt, eds. Washington D.C.: ASM Press, 2000. Helmreich, R. L. “On Error Management: Lessons from Aviation.” British Medical Journal, vol. 320, no. 7237 (2000): 781-785. Helmreich, R.L., and A. C. Merritt. Culture at Work in Aviation and Medicine: National, Organizational, and Professional Influences. Brookfield VT: Ashgate Publishing, 1998. Kortepeter, M. G., J. W. Martin, J. M. Rusnak, T. J. Cieslak, K. L. Warfield, E. L. Anderson, and M. V. Ranadive. “Managing Potential Laboratory Exposure to Ebola Virus Using a Patient Biocontainment Care Unit.” Emerging Infectious Diseases. (2008). Lentzos, F. “Regulating Biorisk: Developing a Coherent Policy Logic (Part II).” Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, vol. 5, no. 1 (2007): 55-61. Lofstedt, R. “Good and Bad Examples of Siting and Building Biosafety Level 4 Laboratories: A Study of Winnipeg, Galveston and Etobicoke.” Journal of Hazardous Materials, 93 (2002): 47-66. Miller, D. and J. Forester. Aviation Safety Human Reliability Analysis Method (ASHRAM). Sandia National Laboratories. SAND2000-2955. 2000. Minnema, D. M. Improving Safety Culture: Recognizing the Underlying Assumptions. Powerpoint presentation for the ISM Workshop, Defense Nuclear Facilities Safety Board, 2007. National Academy of Public Administration for the Federal Aviation Administration. A Review of the Aviation Safety Reporting System: A Report. 1994. 
Newsletter of the European Biosafety Association. Biosafety Organisation in Spain. EBSA 2001 Newsletter, vol. 1, no. 3 (2001). Paradies, M., L. Unger, P. Haas, and M. Terranova. Development of the NRC’s Human Performance Investigation Process (HPIP). NUREG/CR- 5455. System Improvements, Inc. and Concord Associates, Inc.,1993. Patankar, M. S. and E. J. Sabin. Safety Culture Transformation in Technical Operations of the Air Traffic Organization: Project Report and Recommendations. St. Louis, Mo.: Saint Louis University, 2008. Patankar, M. S. A “Study of Safety Culture at an Aviation Organization.” International Journal of Applied Aviation Studies, vol. 3, no. 2 (2003): 243-258. Patankar, M. S., J. P. Brown, and M. D. Treadwell. Safety Ethics: Cases from Aviation, Healthcare, and Occupational and Environmental Health. Aldershot, U.K.: Ashgate Publishing, 2005. Patankar, M. S. and D. Driscoll. “Preliminary Analysis of Aviation Safety Action Programs in Aviation Maintenance.” Proceedings of the First Safety Across High-Consequence Industries Conference, St. Louis, Mo., 97-102. 2004. Patankar, M.S. and J. C. Taylor. Risk Management and Error Reduction in Aviation Maintenance. Aldershot, U.K.: Ashgate Publishing, 2004. Patankar, M. S., T. Bigda-Peyton, E. Sabin, J. Brown, and T. Kelly. A Comparative Review of Safety Cultures. St. Louis, Mo.: Saint Louis University, 2005. Peterson, L.K., E. H. Wight, and M.A. Caruso. “Evaluating Internal Stakeholder Perspectives on Risk-Informed Regulatory Practices for the Nuclear Regulatory Commission.” Paper presented at the WM ‘03 Conference, Tuscon Ariz., 2003. Pounds, J. and A. Isaac. Development of an FAA-EUROCONTROL Technique for the Analysis of Human Error in ATM. DOT/FAA/AM-02/12. Federal Aviation Administration, Office of Aerospace Medicine. 2002. Race, M. S. “Evaluation of the Public Review Process and Risk Communication at High-Level Biocontainment Laboratories.” Applied Biosafety, vol. 13, no. 1 (2008): 45-56. Race, M. S. 
and E. Hammond. “An Evaluation of the Role and Effectiveness of Institutional Biosafety Committees in Providing Oversight and Security at Biocontainment Labs.” Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, vol. 6, no. 1 (2008): 19-35. Reason, J. “Human Error: Models and Management.” British Medical Journal, vol. 320, no. 7237 (2000): 768-770. Rusnak, J. M., M.G. Kortepeter, R.J. Hawley, A.O. Anderson, E. Boudreau, and E. Eitzen. “Risk of Occupationally Acquired Illnesses from Biological Threat Agents in Unvaccinated Laboratory Workers.” Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, vol. 2, no. 4 (2004): 281- 93. Scarborough, A., L. Bailey and J. Pounds. Examining ATC Operational Errors Using the Human Factors Analysis and Classification System. DOT/FAA/AM-05/25. Federal Aviation Administration, Office of Aerospace Medicine. 2005. Schroeder, D., L. Bailey, J. Pounds, and C. Manning. A Human Factors Review of the Operational Error Literature. DOT/FAA/AM-06/21. Federal Aviation Administration, Office of Aerospace Medicine. 2006. Sexton, J. B., E. J. Thomas, and R. L. Helmreich. “Error, Stress and Teamwork in Medicine and Aviation: Cross-sectional Surveys.” British Medical Journal, vol. 320, no. 7237 (2000): 745-749. Shappell, S. and D. Wiegmann. Developing a Methodology for Assessing Safety Programs Targeting Human Error in Aviation. DOT/FAA/AM- 06/24. Federal Aviation Administration, Office of Aerospace Medicine. 2006. Shappell, S., C. Detwiler, K. Halcomb, C. Hackworth, A. Boquet, and D. Wiegmann. Human Error and Commercial Aviation Accidents: A Comprehensive, Fine-Grained Analysis Using HFACS. DOT/FAA/AM- 06/18. Federal Aviation Administration, Office of Aerospace Medicine. 2006. GAO. NASA: Better Mechanisms Needed for Sharing Lessons Learned. GAO-02-195. Washington, D.C.: January 30, 2002. U.S. Nuclear Regulatory Commission. Advisory Committee on Reactor Safeguards. 
Review and Evaluation of the Nuclear Regulatory Commission Safety Research Program. NUREG-1635, vol. 7. 2006. From the Individual Plant Examination of External Events (IPEEE) Program. NUREG-1742, vols. 1-2. 2002. Wedum, A. G. “Pipetting Hazards in the Special Virus Cancer Program.” Journal of the American Biological Safety Program, vol. 2, no. 2 (1997): 11-21. West, D.L., D. R. Twardzik, R. W. McKinney, W. E. Barkley, and A. Hellman. “Identification, Analysis, and Control of Biohazards in Viral Cancer Research.” In Laboratory Safety: Theory and Practice, 167-223. New York, N.Y.: Academic Press, 1980. Wiegmann, D. A. and S. A. Shappell. “Human Error Perspectives in Aviation.” International Journal of Aviation Psychology, vol. 11, no. 4 (2001): 341-357.
As the number of biological labs increases, so too do the safety risks for lab workers. Data on these risks--collected through a safety reporting system (SRS) from reports of hazards, incidents, and accidents--can support safety efforts. However, no such system exists for all biological labs, and a limited system--managed by the Centers for Disease Control and Prevention (CDC) and the Animal and Plant Health Inspection Service (APHIS)--applies to only a subset of these labs. While a national SRS has been proposed, design and implementation are complex. In this context, GAO was asked to identify lessons from (1) the literature and (2) case studies; and to apply those lessons to (3) assess CDC and APHIS's theft, loss, or release (TLR) system for select agents, such as anthrax, and (4) suggest design and implementation considerations for a labwide SRS. To do its work, GAO analyzed SRS literature; conducted case studies of SRSs in the aviation, commercial nuclear, and health care industries; and interviewed agency officials and biosafety specialists. According to the literature, effective design and implementation of an SRS include consideration of program goals and organizational culture to guide decisions in three key areas: (1) reporting and analysis, (2) reporter protection and incentives, and (3) feedback mechanisms. Program goals are best identified through stakeholder involvement, and organizational culture is best understood through assessment.
Case studies of SRSs in three industries--aviation, commercial nuclear, and health care--indicate that (1) assessment, dedicated resources, and management focus are needed to understand and improve safety culture; (2) broad reporting thresholds, experience-driven classification schemes, and local-level processing are useful SRS features in industries new to safety reporting; (3) strong legal protections and incentives encourage reporting and prevent potential confidentiality breaches; and (4) a central, industry-level unit facilitates lesson sharing and evaluation. While the CDC and APHIS Select Agent Program (SAP) has taken steps in the three key areas to improve the usefulness of the TLR system for select agents, opportunities for improvement remain. Specifically, the agencies have taken steps to better define reportable events, ensure the confidentiality of reports, and dedicate resources to use TLR data for safety improvement. However, lessons from the literature and case studies suggest additional steps in the three key areas to enhance the usefulness of the system. For example, lowering reporting thresholds could provide precursor data, and limited immunity could increase the incentive to report. Finally, CDC and APHIS are in a unique position--as recognized authorities in the lab community and with access to TLR reports from across the industry--to guide SRS evaluation and ensure safety lessons are broadly disseminated. For a national safety reporting system for all biological labs, existing information--about labs' organizational culture and the lab community's limited experience with SRSs--suggests the following features in the three key areas: (1) Reporting and analysis. Reporting should be voluntary; available to all workers; cover hazards, incidents, and less serious accidents; be accessible in various modes (Web and postal); and use formats that allow workers to report events in their own words to either an internal or external SRS.
(2) Reporter protections and incentives. Strong confidentiality protections, data deidentification processes, and other reporting incentives are needed to foster trust in reporting. (3) Feedback mechanisms. SRS data should be used at both the local and industry levels for safety improvement. An industry-level entity is needed to disseminate SRS data and to support evaluation. GAO recommends that, in developing legislation for a national SRS for biological labs, Congress consider provisions for certain system features. GAO also recommends three improvements to the CDC and APHIS TLR system. HHS disagreed with the first two recommendations and partially agreed with the third. USDA agreed with the three recommendations.
OPA establishes a “polluter pays” system that places the primary burden of liability for the costs of spills on the party responsible for the spill in return for financial limitations on that liability. Under this system, the responsible party assumes, up to a specified limit, the burden of paying for spill costs—which can include both removal costs (cleaning up the spill) and damage claims (restoring the environment and payment of compensation to parties that were economically harmed by the spill). Above the specified limit, the responsible party generally is no longer financially liable. Responsible parties are liable without limit, however, if the oil discharge is the result of gross negligence or willful misconduct, or a violation of federal operation, safety, and construction regulations. OPA’s “polluter pays” system is intended to provide a deterrent for responsible parties who could potentially spill oil by requiring that they assume the burden of responding to the spill, restoring natural resources, and compensating those damaged by the spill, up to the specified limit of liability. (See table 1 for the limits of liability for vessels and offshore facilities.) In general, liability limits under OPA depend on the kind of vessel or facility from which a spill comes. For an offshore facility, liability is limited to all removal costs plus $75 million. For tank vessels, liability limits are based on the vessel’s tonnage and hull type. In both cases, certain circumstances, such as gross negligence, eliminate the caps on liability altogether. According to the Coast Guard, the leaking well in the current spill is an offshore facility. As noted earlier, pursuant to OPA, the liability limit for offshore facilities is all removal costs plus $75 million for damage claims. The Coast Guard also notes that liability for any spill on or above the surface of the water in this case would be between $65 million and $75 million.
The range derives from a statutory division of liability for mobile offshore drilling units. For spills on or above the surface of the water, mobile offshore drilling units are treated first as tank vessels up to the limit of liability for tank vessels and then as offshore facilities. For example, if an offshore facility’s limit of liability is $75 million (not counting removal costs, for which there is unlimited liability for offshore facilities) and a spill results in $100 million in damage claims, the responsible party pays up to $75 million of those claims, leaving $25 million in costs beyond the limit of liability. Under OPA, the authorized limit on federal expenditures for a response to a single spill is currently set at $1 billion, and natural resource damage assessments and claims may not exceed $500 million. OPA requires responsible parties to demonstrate their ability to pay for oil spill response up to statutorily specified limits. Specifically, by statute, with few exceptions, offshore facilities that are used for exploring for, drilling for, producing, or transporting oil from facilities engaged in oil exploration, drilling, or production are required to have a certificate of financial responsibility that demonstrates this ability. If the responsible party denies a claim or does not settle it within 90 days, a claimant may commence action in court against the responsible party or present the claim to the NPFC. OPA also provides that the Fund can be used to pay for oil spill removal costs and damages when those responsible do not pay or cannot be located. This may occur when the source of the spill and, therefore, the responsible party is unknown, or when the responsible party does not have the ability to pay. In other cases, because cost recovery can take years, the responsible party may go bankrupt or be dissolved in the interim.
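The liability arithmetic in the example above can be sketched in a few lines. This is an illustrative computation under the limits described in this report (uncapped removal costs plus a $75 million damages cap for offshore facilities), not statutory text; the function name and inputs are our own.

```python
# Illustrative sketch of the OPA liability split for an offshore facility.
# The $75 million damages cap and uncapped removal costs reflect the limits
# described in this report; all other details are hypothetical.

OFFSHORE_DAMAGES_CAP = 75_000_000  # damage claims capped at $75 million

def offshore_facility_liability(removal_costs, damage_claims, capped=True):
    """Return (responsible_party_pays, costs_beyond_limit).

    Removal costs are uncapped for offshore facilities; damage claims are
    capped at $75 million unless the cap is eliminated (e.g., gross
    negligence, willful misconduct, or regulatory violations).
    """
    if not capped:
        # Cap eliminated: the responsible party is liable without limit.
        return removal_costs + damage_claims, 0
    paid_damages = min(damage_claims, OFFSHORE_DAMAGES_CAP)
    return removal_costs + paid_damages, damage_claims - paid_damages

# The report's example: $100 million in damage claims against a $75 million cap.
pays, beyond = offshore_facility_liability(removal_costs=0,
                                           damage_claims=100_000_000)
# pays is $75 million; beyond (costs past the limit) is $25 million.
```

Costs beyond the limit are the amounts that, as the report describes, may fall to the Fund or remain uncompensated.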
NPFC manages the Fund by disbursing funds for federal cleanup, monitoring the sources and uses of funds, adjudicating claims submitted to the Fund for payment, and pursuing reimbursement from the responsible party for costs and damages paid by the Fund. The Coast Guard is responsible for adjusting vessels’ limits of liability for significant increases in inflation and for making recommendations to Congress on whether other adjustments are necessary to help protect the Fund. DOI’s Minerals Management Service is responsible for adjusting limits of liability of offshore facilities. Response to large oil spills is typically a cooperative effort between the public and private sector, and there are numerous players who participate in responding to and paying for oil spills. To manage the response effort, the responsible party, the Coast Guard, EPA, and the pertinent state and local agencies form the unified command, which implements and manages the spill response. OPA defines the costs for which responsible parties are liable and the costs for which the Fund is made available for compensation in the event that the responsible party does not pay or is not identified. These costs, or “OPA compensable” costs, are of two main types: Removal costs: Removal costs are incurred by the federal government or any other entity taking approved action to respond to, contain, and clean up the spill. For example, removal costs include the equipment used in the response—skimmers to pull oil from the water, booms to contain the oil, planes for aerial observation—as well as salaries and travel and lodging costs for responders. Damages caused by the oil spill: Damages that can be compensated under OPA cover a wide range of both actual and potential adverse effects from an oil spill, for which a claim may be made to either the responsible party or the Fund. 
Claims include natural resource damage claims filed by trustees, claims for uncompensated removal costs, and third-party damage claims for lost or damaged property and lost profits, among other things. The Fund has two major components—the Principal Fund and the Emergency Fund. The Principal Fund provides the funds for third-party and natural resource damage claims, limit of liability claims, and reimbursement of government agencies’ removal costs, and provides for oil spill-related appropriations. A number of agencies—including the Coast Guard, EPA, and DOI—receive an annual appropriation from the Principal Fund to cover administrative, operational, personnel, and enforcement costs. To ensure rapid response to oil spills, OPA created an Emergency Fund that authorizes the President to spend $50 million each year to fund spill response and the initiation of natural resource damage assessments, which provide the basis for determining the natural resource restoration needs that address the public’s loss and use of natural resources as a result of a spill. Emergency funds not used in a fiscal year are carried over to subsequent fiscal years and remain available until expended. To the extent that $50 million is inadequate, the Maritime Transportation Security Act of 2002 grants authority to advance up to $100 million from the Fund to pay for removal activities. These emergency funds may be used for containing and removing oil from water and shorelines, preventing or minimizing a substantial threat of discharge, and monitoring the removal activities of the responsible party. NPFC officials told us in June 2010 that the Emergency Fund has received the authorized $100 million advance for the Federal On-Scene Coordinator to respond to the spill and for federal trustees to initiate natural resource damage assessments, along with an additional $50 million that had not been apportioned in 2006.
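The Emergency Fund mechanics described above (a $50 million annual authorization, carryover of unused amounts, and a $100 million advance authority) can be sketched as a simple running-balance model. The yearly spending figures and the function itself are hypothetical illustrations, not NPFC's accounting.

```python
# Hypothetical running-balance model of the Emergency Fund: $50 million is
# authorized each fiscal year, unobligated amounts carry over until expended,
# and up to $100 million more can be advanced from the Fund under the
# Maritime Transportation Security Act of 2002. Spending figures are invented.

ANNUAL_AUTHORIZATION = 50_000_000
ADVANCE_AUTHORITY = 100_000_000

def emergency_funds_available(yearly_spending, advance_taken=0):
    """Return the Emergency Fund balance after a sequence of fiscal years."""
    balance = 0
    for spent in yearly_spending:
        balance += ANNUAL_AUTHORIZATION  # new fiscal-year authorization
        balance -= spent                 # spending draws the balance down
    # Any advance actually taken is limited by the $100 million authority.
    return balance + min(advance_taken, ADVANCE_AUTHORITY)

# e.g., three hypothetical years spending $40, $30, and $20 million
# leave $60 million carried over, before any advance.
carryover = emergency_funds_available([40_000_000, 30_000_000, 20_000_000])
```

The carryover behavior is why unapportioned amounts from earlier years (such as the 2006 funds mentioned above) can still be available for a later spill.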
Officials said they began using emergency funds at the beginning of May to pay for removal activities in the Gulf of Mexico. The Fund is financed primarily from a per-barrel tax on petroleum products either produced in the United States or imported from other countries. The balance of the Fund (including both the Principal and the Emergency Fund) has varied over the years (see fig. 1). The Fund’s balance generally declined from 1995 through 2006, and from fiscal year 2003 through 2007, its balance was less than the authorized limit on federal expenditures for the response to a single spill, which is currently set at $1 billion. This was in part because the Fund’s main source of revenue—a $0.05 per barrel tax on U.S.-produced and imported oil—was not collected for most of the time from 1995 through 2006. However, the Energy Policy Act of 2005 reinstated the barrel tax beginning in April 2006. Subsequently, the Emergency Economic Stabilization Act of 2008 increased the tax rate to $0.08 per barrel through 2016. The balance in the Fund as of June 1, 2010, was about $1.6 billion. With the barrel tax once again in place, NPFC anticipates that the Fund will be able to cover potential noncatastrophic liabilities. In 2007 we reported several risks to the Fund, including the threat of a catastrophic spill. Although the Fund’s balance has increased, significant uncertainties remain regarding the impact of a catastrophic spill—such as the Deepwater Horizon—or multiple catastrophic spills on the Fund’s viability. Location, time of year, and type of oil are key factors affecting oil spill costs of noncatastrophic spills, according to industry experts, agency officials, and our analysis of spills. Given the magnitude of the current spill, however, its size will also be a factor affecting costs.
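The Fund's revenue history described above reduces to a simple per-barrel rate schedule. In the sketch below, the cutoff years are simplified from the acts cited (this report does not give exact effective dates), and the barrel volume is invented for illustration.

```python
# Simplified sketch of the Fund's primary revenue stream: a per-barrel tax
# on U.S.-produced and imported oil. Cutoff years approximate the statutory
# history described in the report; they are not exact effective dates.

def barrel_tax_rate(year):
    """Simplified per-barrel tax rate, in dollars."""
    if year < 2006:
        return 0.0   # tax lapsed for most of 1995-2006
    elif year < 2009:
        return 0.05  # reinstated by the Energy Policy Act of 2005
    else:
        return 0.08  # raised by the Emergency Economic Stabilization Act of 2008

def annual_revenue(year, barrels):
    """Revenue from a hypothetical volume of taxable barrels in a given year."""
    return barrel_tax_rate(year) * barrels

# e.g., a hypothetical 5 billion taxable barrels in 2010:
revenue = annual_revenue(2010, 5_000_000_000)  # $400 million at $0.08/barrel
```

The zero-rate years in the schedule are what drove the declining Fund balance from 1995 through 2006 noted above.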
Officials also identified two other factors that may influence oil spill costs to a lesser extent—the effectiveness of the spill response and the level of public interest in a spill. In ways that are unique to each spill, these factors can affect the breadth and difficulty of the response effort or the extent of damage that requires mitigation. According to state officials with whom we spoke and industry experts, there are three primary characteristics of location that affect costs: Remoteness: For spills that occur in remote areas, spill response can be particularly difficult in terms of mobilizing responders and equipment, and remote locations can complicate the logistics of removing oil from the water—all of which can increase the costs of a spill. Proximity to shore: There are also significant costs associated with spills that occur close to shore. Contamination of shoreline areas has a considerable bearing on the costs of spills, as such spills can require manual labor to remove oil from the shoreline and sensitive habitats. The extent of damage is also affected by the specific shoreline location. Proximity to economic centers: Spills that occur in the proximity of economic centers can cost more when local services are disrupted. For example, a spill near a port can interrupt the flow of goods, necessitating an expeditious response in order to resume business activities, which could increase removal costs. Additionally, spills that disrupt economic activities can result in expensive third-party damage claims. The time of year in which a spill occurs can also affect spill costs—in particular, by affecting local economies and response efforts. According to several state and private-sector officials with whom we spoke, spills that disrupt seasonal events that are critical for local economies can result in considerable expenses.
For example, spills in the spring months in areas of the country that rely on revenue from tourism may incur additional removal costs in order to expedite spill cleanup or to meet stricter cleanup standards. The time of year in which a spill occurs also affects response efforts: inclement weather, such as harsh winter storms and even hurricanes, can result in higher removal costs because of the increased difficulty of mobilizing equipment and personnel to respond to a spill in adverse conditions. The different types of oil can be grouped into four categories, each with its own set of effects on spill response and the environment. Lighter oils such as jet fuels, gasoline, and diesel fuel dissipate and evaporate quickly, and as such, often require minimal cleanup. However, these oils are highly toxic and can severely affect the environment if conditions for evaporation are unfavorable. For instance, in 1996, a tank barge that was carrying home-heating oil grounded in the middle of a storm near Point Judith, Rhode Island, spilling approximately 828,000 gallons of heating oil (light oil). Although this oil might dissipate quickly under normal circumstances, heavy wave conditions caused an estimated 80 percent of the release to mix with water, with only about 12 percent evaporating and 10 percent staying on the surface of the water. Natural resource damages alone were estimated at $18 million, due to the death of approximately 9 million lobsters, 27 million clams and crabs, and over 4 million fish. Heavier oils, such as crude oils and other heavy petroleum products, are less toxic than lighter oils but can also have severe environmental impacts.
Medium and heavy oils do not evaporate much, even during favorable weather conditions, and can blanket structures they come in contact with—boats and fishing gear, for example—as well as the shoreline, creating severe environmental impacts to these areas and harming waterfowl and fur-bearing mammals through coating and ingestion. Additionally, heavy oils can sink, creating prolonged contamination of the sea bed and tar balls that sink to the ocean floor and scatter along beaches. These spills can require intensive shoreline and structural cleanup, which is time-consuming and expensive. For example, in 1995, a tanker spilled approximately 38,000 gallons of heavy fuel oil into the Gulf of Mexico when it collided with another tanker as it prepared to lighter its oil to another ship. Less than 1 percent (210 gallons) of the oil was recovered from the sea, and, as a result, recovery efforts on the beaches of Matagorda and South Padre Islands were labor intensive, as hundreds of workers had to manually pick up tar balls with shovels. The total removal costs for the spill were estimated at $7 million. In our 2007 report, we also noted that industry experts cited two other factors that affect the costs incurred during a spill. Effectiveness of Spill Response: Some private-sector experts stated that the effectiveness of spill response can affect the cost of cleanup. The longer it takes to assemble and conduct the spill response, the more likely it is that the oil will move with changing tides and currents and affect a greater area, which can increase costs. Some experts said the level of experience of those involved in the incident command is critical to the effectiveness of spill response. For example, they said poor decision making during a spill response could lead to the deployment of unnecessary response equipment, or worse, not enough equipment to respond to a spill.
Several experts expressed concern that Coast Guard officials are increasingly inexperienced in handling spill response, in part because the Coast Guard’s mission has expanded to include homeland security initiatives. Public interest: Several experts with whom we spoke stated that the level of public attention placed on a spill creates pressure on parties to take action and can increase costs. They also noted that the level of public interest can increase the standards of cleanliness expected, which may increase removal costs. The total costs of the Deepwater Horizon spill in the Gulf of Mexico are currently undetermined and will be unknown for some time even after the spill is fully contained. According to a press release from BP, as of June 7, 2010, the cost of the response amounted to about $1.25 billion, which includes the spill response, containment, relief well drilling, grants to the Gulf states, damage claims paid, and federal costs. Of the $1.25 billion, approximately $122 million (as of June 1, 2010) has been paid from the Fund for the response operation, according to NPFC officials. The total costs will not likely be known for a while, as it can take many months or years to determine the full effect of a spill on natural resources and to determine the costs and extent of the natural resource damage. However, the spill has been described as the biggest U.S. offshore platform spill in 40 years, and possibly the most costly. Our work for this testimony did not include a thorough evaluation of the factors affecting the current spill. However, some of the same key factors that influenced the cost of the 51 major oil spills we reviewed in 2007 will likely affect the costs of the Gulf Coast spill. For example, the spill occurred in the spring in an area of the country—the Gulf Coast—that relies heavily on revenue from tourism and the commercial fishing industry.
Spills that occur in the proximity of tourist destinations like beaches can result in additional removal costs in order to expedite spill cleanup, or because there are stricter standards for cleanup, which increase the costs. In addition, according to an expert, the loss in revenue from suspended commercial and recreational fishing in the Gulf Coast states is currently estimated at $144 million per year. Another factor affecting spill costs is the type of oil. The oil that continues to spill into the Gulf of Mexico is a light oil—specifically “light sweet crude” oil—that is toxic and can create long-term contamination of shorelines and harm waterfowl and fur-bearing mammals. According to the U.S. Fish and Wildlife Service, many species of wildlife face grave risk from the spill, as do 36 national wildlife refuges that may be affected. In recent testimony, the EPA Deputy Administrator described the Deepwater Horizon spill as a “massive and potentially unprecedented environmental disaster.” To date, the Fund has been able to cover costs from major spills that responsible parties have not paid, but risks and uncertainties remain. We reported in 2007 that the current liability limits for certain vessel types, notably tank barges, may have been disproportionately low relative to costs associated with such spills. In addition, the Fund faced other potential risks to its viability, including ongoing claims from existing spills and the potential for a catastrophic oil spill. The current spill in the Gulf of Mexico could result in a significant strain on the Fund, which currently has a balance of about $1.6 billion. The Fund has been able to cover costs from major spills that responsible parties have not paid, but additional focus on limits of liability is warranted.
Limits of liability are the amount, under certain circumstances, above which responsible parties are no longer financially liable for spill removal costs and damage claims, in the absence of gross negligence or willful misconduct, or the violation of an applicable federal safety, construction, or operating regulation. If the responsible party’s costs exceed the limit of liability, the responsible party can make a claim against the Fund for the amount above the limit. Major oil spills that exceed a vessel’s limit of liability are infrequent, but their effect on the Fund can be significant. In our 2007 report, we reported that 10 of the 51 major oil spills that occurred from 1990 through 2006 resulted in limit-of-liability claims on the Fund. These limit-of-liability claims totaled more than $252 million and ranged from less than $1 million to more than $100 million. Limit-of-liability claims will continue to have a pronounced effect on the Fund. NPFC estimates that 74 percent of claims under adjudication that were outstanding as of January 2007 were for spills in which the limit of liability had been exceeded. The amount of these claims under adjudication was $217 million. In 2007, we identified two key areas in which further attention to these liability limits appeared warranted and made recommendations to the Commandant of the Coast Guard regarding both—the need to adjust limits periodically in the future to account for significant increases in inflation and the appropriateness of some current liability limits. Regarding the need to adjust liability limits to account for increases in inflation, we reported that the Fund was exposed to about $39 million in liability claims for the 51 major spills from 1990 through 2006 that could have been avoided if the limits of liability had been adjusted for inflation as required by law. We recommended adjusting limits of liability for vessels every 3 years to reflect significant changes in inflation, as appropriate.
Per requirements in OPA as amended by the Delaware River Protection Act, the Coast Guard published an interim rule in July 2009—made final in January 2010—that adjusted vessels’ limits of liability to reflect significant increases in the Consumer Price Index, noting that the inflation adjustments to the limits of liability are required by OPA to preserve the deterrent effect and polluter-pays principle embodied in the OPA liability provisions. DOI has been delegated responsibility by the President to adjust the liability limits for offshore facilities, and this responsibility has been redelegated by DOI to the Minerals Management Service. To date, these liability limits have not been adjusted for inflation. The Coast Guard and Maritime Transportation Act of 2006 significantly increased the limits of liability. Both OPA and the 2006 act base the liability on a specified amount per gross ton of vessel volume, with different amounts for vessels that transport oil commodities (tankers and tank barges) than for vessels that carry oil as a fuel (such as cargo vessels, fishing vessels, and passenger ships). The 2006 act raised both the per-ton and the required minimum amounts, differentiating between vessels with a double hull, which helps prevent oil spills resulting from collision or grounding, and vessels without a double hull. For example, the liability limit for single-hull vessels larger than 3,000 gross tons was increased from the greater of $1,200 per gross ton or $10 million to the greater of $3,000 per gross ton or $22 million. However, our analysis of the 51 major spills showed that the average spill cost for some types of vessels, particularly tank barges, was higher than the limit of liability, including the new limits established in 2006. Thus, we recommended that the Commandant of the Coast Guard determine whether and how liability limits should be changed by vessel type, and make specific recommendations about these changes to Congress.
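To make the "greater of" structure of these limits concrete, a minimal sketch follows. It assumes the single-hull figures cited above ($3,000 per gross ton or $22 million, whichever is greater, for vessels larger than 3,000 gross tons) and the general OPA rule that, absent gross negligence or an applicable violation, costs above the limit may be claimed against the Fund; the function names and the example tonnage are illustrative, not drawn from statute.

```python
def single_hull_liability_limit(gross_tons: int) -> int:
    """2006-act limit for a single-hull vessel over 3,000 gross tons:
    the greater of $3,000 per gross ton or $22 million."""
    return max(3_000 * gross_tons, 22_000_000)

def fund_exposure(spill_cost: int, liability_limit: int) -> int:
    """The responsible party pays up to the limit; the excess, if any,
    may be claimed against the Fund."""
    return max(spill_cost - liability_limit, 0)

# A hypothetical 10,000-gross-ton single-hull tanker:
limit = single_hull_liability_limit(10_000)  # $30 million (3,000 x 10,000 exceeds $22 million)
excess = fund_exposure(45_000_000, limit)    # a $45 million spill leaves $15 million to the Fund
```

For a smaller vessel, say 5,000 gross tons, the per-ton figure ($15 million) falls below the floor, so the $22 million minimum applies instead.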
In its August 2009 Annual Report to Congress on OPA liability limits, the Coast Guard had similar findings on the adequacy of some of the new limits. The Coast Guard found that 51 spills or substantial threats of a spill have resulted or are likely to result in removal costs and damages that exceed the liability limits amended in 2006. Specifically, the Coast Guard reported that liability limits for tank barges and cargo vessels with substantial fuel oil may not sufficiently account for the historic costs incurred by spills from these vessel types. The Coast Guard concluded that increasing liability limits for tank barges and nontank vessels—cargo, freight, and fishing vessels—over 300 gross tons would increase the Fund balance. With regard to making specific adjustments, the Coast Guard said dividing costs equally between the responsible parties and the Fund was a reasonable standard to apply in determining the adequacy of liability limits. However, the Coast Guard did not recommend explicit changes to achieve either that 50/50 standard or any other division of responsibility. The Fund also faces several other potential challenges that could affect its financial condition: Additional claims could be made on spills that have already been cleaned up: Natural resource damage claims can be made on the Fund for years after a spill has been cleaned up. The official natural resource damage assessment conducted by trustees can take years to complete, and once it is completed, claims can be submitted to the NPFC for up to 3 years thereafter. Costs and claims may occur on spills from previously sunken vessels that discharge oil in the future: Previously sunken vessels that are submerged and threaten to discharge oil represent an ongoing liability to the Fund. There are over 1,000 sunken vessels that pose a threat of oil discharge.
These potential spills are particularly problematic because in many cases there is no viable responsible party that would be liable for removal costs. Therefore, the full cost burden of oil spilled from these vessels would likely be paid by the Fund. Spills may occur without an identifiable source and, therefore, no responsible party: Mystery spills also have a sustained effect on the Fund, because costs for spills without an identifiable source—and therefore no responsible party—may be paid out of the Fund. Although mystery spills are a concern, the total cost to the Fund from mystery spills was lower than the costs of known vessel spills in 2001 through 2004. Additionally, none of the 51 major oil spills was the result of discharge from an unknown source. A catastrophic spill could strain the Fund’s resources: In 2007, we reported that since the 1989 Exxon Valdez spill, which was the impetus for authorizing the Fund’s usage, no oil spill has come close to matching its costs—estimated at $2.2 billion for cleanup costs alone, according to the vessel’s owner. However, as of early June, the response for the Deepwater Horizon spill had already totaled over $1 billion, according to BP, and to date, the spill has not been fully contained. As a result, the Gulf of Mexico spill could easily eclipse the Exxon Valdez, becoming the most costly offshore spill in U.S. history. The Fund is currently authorized to pay out a maximum of $1 billion on a single spill for response costs, with up to $500 million for natural resource damage claims. Although the Fund has been successful thus far in covering costs that responsible parties did not pay, it may not be sufficient to pay such costs for a spill—such as the Deepwater Horizon—that is likely to have catastrophic consequences.
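The per-spill ceilings just described can be sketched as a simple calculation. This is a simplified reading of the two caps (a $1 billion overall limit per spill, of which natural resource damage claims may account for at most $500 million); it ignores the statute's other apportionment rules, and the constant and function names are ours, not statutory terms.

```python
PER_SPILL_CAP = 1_000_000_000  # maximum authorized Fund payout for a single spill
NRD_CAP = 500_000_000          # ceiling on natural resource damage claims per spill

def max_fund_payout(removal_costs: int, nrd_claims: int) -> int:
    """Apply both ceilings: natural resource damage claims are capped
    first, then the combined payout is capped at $1 billion."""
    payable_nrd = min(nrd_claims, NRD_CAP)
    return min(removal_costs + payable_nrd, PER_SPILL_CAP)
```

Under this reading, a spill with $800 million in removal costs and $600 million in natural resource damage claims would exhaust the $1 billion cap, which is why a catastrophic spill on the scale of the Deepwater Horizon could strain the Fund.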
While BP has said it will pay all legitimate claims associated with the spill, should the company decide it will not or cannot pay for the costs exceeding its limit of liability, the Fund may have to bear these costs. Given the magnitude of the Deepwater Horizon spill, the costs could result in a significant strain on the Fund. Recently, several options have been identified to address the Fund’s vulnerabilities. In particular, the Congressional Research Service (CRS) has identified options to address the vulnerabilities, and Members of Congress have also introduced legislation that would address the risks to the Fund. These options include: Increasing liability limits. CRS proposes raising the liability caps for vessels so that the responsible party would be required to pay a greater share of the costs before the Fund is used. In addition, S. 3305 proposes raising the liability limit for damage claims related to offshore facilities from $75 million to $10 billion. Increasing the per-barrel tax. CRS and congressional options include increasing the current per-barrel tax used to generate revenue for the Fund in order to raise the Fund’s balance—H.R. 4213 proposes raising the tax from the current $0.08 per barrel to $0.34. According to CRS, this option would increase the likelihood that there is sufficient money available in the Fund if costs exceed the responsible party’s liability limits. Including oil owners as liable parties. CRS suggests expanding the definition of liable parties to include the owner of the oil being transported by a vessel. In addition, the Administration announced a proposal on May 12, 2010, that addresses several aspects of the response to the Deepwater Horizon spill, primarily by changing the way the Fund operates.
It includes, among other things, proposals to increase the statutory limitation on per-spill expenditures from the Fund from $1 billion to $1.5 billion for spill response and from $500 million to $750 million for natural resource damage assessments and claims. In addition, similar to the CRS and congressional proposals, the Administration is proposing an increase in the per-barrel tax to $0.09 this year, 7 years earlier than current law requires. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For questions about this statement, contact Susan Fleming at (202) 512-2834 or flemings@gao.gov. Individuals making key contributions to this testimony include Jeanette Franzel, Heather Halliwell, David Hooper, Hannah Laufe, Stephanie Purcell, Susan Ragland, Amy Rosewarne, Doris Yanger, and Susan Zimmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
On April 20, 2010, an explosion at the mobile offshore drilling unit Deepwater Horizon resulted in a massive oil spill in the Gulf of Mexico. The spill's total cost is unknown, but it may impose considerable costs on the private sector, as well as federal, state, and local governments. The Oil Pollution Act of 1990 (OPA) set up a system that places the liability--up to specified limits--on the responsible party. The Oil Spill Liability Trust Fund (Fund), administered by the Coast Guard, pays for costs not paid by the responsible party. GAO previously reported on the Fund and factors driving the cost of oil spills and is beginning work on the April 2010 spill. This testimony focuses on (1) how oil spills are paid for, (2) the factors that affect major oil spill costs, and (3) implications of major oil spill costs for the Fund. It is largely based on GAO's 2007 report, for which GAO analyzed oil spill cost data and reviewed documentation on the Fund's balance and vessels' limits of liability. To update the report, GAO obtained information from and interviewed Coast Guard officials. OPA places the primary burden of liability for the costs of oil spills on the responsible party in return for financial limitations on that liability. Thus, the responsible party assumes the primary burden of paying for spill costs--which can include both removal costs (cleaning up the spill) and damage claims (restoring the environment and compensating parties that were economically harmed). To pay both the costs above this limit and costs incurred when a responsible party does not pay or cannot be identified, OPA authorized use of the Fund, up to $1 billion per spill, which is financed primarily from a per-barrel tax on petroleum products. The Fund also may be used to pay for natural resource damage assessments and to monitor the recovery activities of the responsible party, among other things.
While the responsible party is largely paying for the current spill's cleanup, Coast Guard officials said that they began using the Fund--which currently has a balance of $1.6 billion--in May 2010 to pay for certain removal activities in the Gulf of Mexico. Several factors, including location, time of year, and type of oil, affect the cleanup costs of noncatastrophic spills. Although these factors will certainly affect the cost of the Gulf spill--which is unknown at this time--additional factors, such as the magnitude of the spill, will also affect its costs. These factors can affect the breadth and difficulty of recovery and the extent of damage in the following ways: (1) Location. A remote location can increase the cost of a spill because of the additional expense involved in mounting a remote response. A spill that occurs close to shore can also become costly if it involves the use of manual labor to remove oil from sensitive shoreline habitat. (2) Time of year. A spill occurring during fishing or tourist season might carry additional economic damage, or a spill occurring during a stormy season might prove more expensive because it is more difficult to clean up than one occurring during a season with generally calmer weather. (3) Type of oil. Lighter oils such as gasoline or diesel fuels dissipate and evaporate quickly--requiring minimal cleanup--but are highly toxic and create severe environmental impacts. Heavier oils such as crude oil do not evaporate and, therefore, may require intensive structural and shoreline cleanup. Since the Fund was authorized in 1990, it has been able to cover costs not covered by responsible parties, but risks and uncertainties exist regarding the Fund's viability. For instance, the Fund is at risk from claims resulting from spills that significantly exceed responsible parties' liability limits.
Of the 51 major oil spills GAO reviewed in 2007, the cleanup costs for 10 exceeded the liability limits, resulting in claims of about $252 million. In 2006, Congress increased liability limits, but for certain vessel types, the limits may still be low compared with the historic costs of cleaning up spills from those vessels. The Fund faces other potential risks as well, including ongoing claims from existing spills, claims related to sunken vessels that may begin to leak oil, and the threat of a catastrophic spill--such as the recent Gulf spill.
DOD relies on a number of individual processes and activities, known collectively as supply chain management, to purchase, produce, and deliver items and services to military forces. The department relies on working capital (revolving) funds maintained by the defense and service logistics agencies to finance the flow of these items to the forces. Working capital funds allow these agencies to purchase needed items from suppliers. Military units then order items from the logistics agencies and pay for them with annually appropriated operations and maintenance funds when the requested items—either from inventory or manufacturers—are delivered to the units. The Under Secretary of Defense (Acquisition, Technology, and Logistics) has been designated by the Secretary of Defense as the department’s Defense Logistics Executive, with authority to address logistics and supply chain issues. Officials within the Office of the Assistant Deputy Under Secretary of Defense for Supply Chain Integration completed the first iteration of the plan in July 2005 and have updated it several times since then based on information provided by designated lead proponents for the individual initiatives. DOD has shared its plan externally with Congress, OMB, and our office. OMB has characterized the plan as a model for other federal agencies to use in developing their own plans to address their high-risk areas. The plan has three focus areas: requirements forecasting, asset visibility, and materiel distribution—issues that we have identified based on GAO audits since 1995 as critical to improving DOD supply chain management. Accurately forecasted supply requirements are a key first step in buying, storing, positioning, and shipping items that the warfighter needs. DOD describes asset visibility as the ability to provide timely and accurate information on the location, quantity, condition, movement, and status of supplies and the ability to act on this information.
Distribution is the process for synchronizing all elements of the logistics system to deliver the “right things” to the “right place” at the “right time” to support the warfighter. Our prior work has identified problems in these three focus areas, as well as other aspects of supply chain management. DOD’s plan identifies joint theater logistics as an initiative that will improve both asset visibility and materiel distribution. Joint theater logistics is intended to enhance the ability of a joint force commander to direct various logistics functions, including distribution and supply support activities, across the theater and, for several years, has been part of DOD’s planned transformation of logistics capabilities. Joint theater logistics is one of seven future logistics capabilities that DOD has grouped under “focused logistics.” DOD has broadly defined joint theater logistics as an adaptive ability to anticipate and respond to emerging theater logistics and support requirements. In general, when legislative and agency actions result in significant and sustainable progress toward resolving a high-risk problem, we remove the high-risk designation. Key determinants include a demonstrated strong commitment to and top leadership support for addressing problems, the capacity to do so, a corrective action plan, and demonstrated progress in implementing corrective measures. From 1990 through 2007, we removed 18 areas from the high-risk list. 
Our decisions on removing supply chain management from the high-risk list will be guided by whether DOD (1) sustains top leadership commitment and long-term institutional support for the plan; (2) obtains necessary resource commitments from the military services, the Defense Logistics Agency, and other organizations; (3) makes substantial progress implementing improvement initiatives across the department; (4) establishes a program to demonstrate progress and validate the effectiveness of the initiatives; and (5) completes the development of a comprehensive, integrated strategy for guiding supply chain management improvement efforts across the department. The most recent update to the plan in May 2007 shows that DOD, over the past year, has made progress in developing and implementing its improvement initiatives. We noted this progress in the January 2007 update of our high-risk series. Specific examples of progress made include the following: DOD has established joint deployment distribution operations centers in each geographic combatant command. In early 2004, DOD established the first of these operations centers in Kuwait, under U.S. Central Command, after distribution problems arose during the initial stages of Operation Iraqi Freedom. DOD has since expanded this organization to its other geographic combatant commands. These operations centers can help joint force commanders synchronize the arrival of supplies into a theater and assist in other aspects of distribution and supply support. They are designed to incorporate representatives from DOD components, such as U.S. Transportation Command, the Defense Logistics Agency, and the military services, who can provide a knowledgeable connection to logistics supply centers in the United States and facilitate the distribution of supplies to the theater. 
The expansion of these operations centers to all the geographic commands was based on the success of the first operations center in Kuwait, which has been credited with improving the management of supplies moving across the distribution system and achieving cost savings. DOD has reported initial success with an initiative aimed at streamlining the storage and distribution of common items for multiple military service locations through the use of Defense Logistics Agency hubs. The objectives of this initiative, called joint regional inventory and materiel management, include eliminating duplicate materiel handling and inventory layers. DOD has met key milestones in this initiative and recently completed the pilot program in Hawaii. U.S. Pacific Command officials stated that they had reduced redundant service-managed inventories, the number of times they handle parts, and customer wait times over the course of the pilot. They estimated that the services had reduced their inventory levels by more than $10 million. In March 2007, the Defense Logistics Agency was tasked to be the lead proponent for continued worldwide implementation of joint regional inventory and materiel management. DOD also made progress toward improving transportation management of military freight. Before the end of this fiscal year, U.S. Transportation Command plans to award a contract to a third-party logistics provider, or 3PL, to coordinate the movement of freight shipments within the continental United States. This effort, called the defense transportation coordination initiative, is aimed at improving the reliability, predictability, and efficiency of moving freight among DOD’s depots, logistics centers, and field activities.
In a recent report on this initiative, we stated that DOD had taken numerous actions to incorporate the lessons learned from a prior prototype program and, moreover, had taken positive steps to adopt best practices employed by other public and private organizations to transform their culture. Still, the long-term success of this effort remains uncertain given the challenges in undertaking organizational transformation and because the program is still in its early stages. Despite the progress indicated by the development and implementation of these initiatives, the recent update of DOD’s plan indicates some delays in achieving certain milestones. For example, the radio frequency identification (RFID) initiative experienced a slippage from December 2006 to September 2007 in its milestone to implement passive RFID at the first 25 percent of Defense Logistics Agency’s distribution centers located outside the continental United States. This milestone was adjusted based on lessons learned from the implementation of RFID at sites within the continental United States. Also, the item unique identification initiative experienced a slippage of a year, from January 2007 to January 2008, for the milestone on demonstrating integration with international entities, because required ratification from the North Atlantic Treaty Organization was delayed. Schedule delays such as these may be expected given the long-standing nature of the problems being addressed, the complexities of the initiatives, and the involvement of multiple organizations. Furthermore, some of these initiatives are in the early stages of implementation, with full implementation several years away. The long-term time frames for many of these initiatives present challenges to the department in sustaining progress toward substantially completing their implementation.
Since the last hearing before this Subcommittee in July 2006, we have not seen significant changes in how DOD proposes to measure the impact of its initiatives in its plan. The plan, as before, contains four performance metrics—backorders, customer wait time, on-time orders, and logistics response time. While these four measures capture broad aspects of DOD’s supply chain performance, they can be affected by variables other than the initiatives themselves. For example, natural disasters, wartime surges in requirements, or disruption in the distribution process could each result in increased backorders, longer customer wait time, fewer on-time orders, and slowed response time, regardless of DOD’s initiatives. Consequently, changes in these high-level metrics might not be directly attributable to the initiatives in the plan. While it may take years before the results of programs become apparent, intermediate metrics can be used to provide information on interim results and show progress toward intended results. In addition, when program results could be influenced by external factors, intermediate metrics can be used to identify the program’s discrete contribution to the specific result. As we noted last July, the results of DOD’s initiatives would be more apparent if DOD applied more outcome-oriented performance metrics for many of the individual initiatives and for the three focus areas. Outcome-oriented performance metrics show results or outcomes related to an initiative or program in terms of effectiveness, efficiency, impact, or all of these. Since last July, DOD has not added new outcome-focused performance metrics to its plan. DOD also continues to lack cost metrics that might show efficiencies gained through these supply chain efforts, either at the initiative level or overall. In total, DOD’s plan identifies a need to develop outcome-focused performance metrics for 6 initiatives, and 9 out of 10 initiatives lack cost metrics. 
We recommended in January that DOD develop, implement, and monitor outcome-focused performance and cost metrics for all the individual initiatives in the plan as well as for the plan’s focus areas of requirements forecasting, asset visibility, and materiel distribution. In response to our recommendation, DOD asserted that it had developed and implemented outcome-focused performance and cost metrics for logistics across the department, but it also acknowledged that more work needed to be done to link the outcome metrics to the initiatives in the plan as well as for the focus areas. DOD stated that these linkages will be completed as part of full implementation of each initiative.

Our recent work has identified continued systemic weaknesses in aspects of DOD’s supply chain management. I will briefly highlight some of the results from these reviews, structured around the three focus areas covered by DOD’s plan.

In the area of requirements forecasting, the military services are experiencing difficulties estimating acquisition lead times to acquire spare parts for equipment and weapon systems. Effective processes that identify and manage acquisition lead times are of critical importance to maintaining cost-effective inventories, budgeting, and having materiel available when it is needed. In March 2007, we reported that 44 percent of the services’ lead time estimates varied either earlier or later than the actual lead times by at least 90 days. Overestimates and underestimates of acquisition lead time contribute to inefficient use of funds and potential shortages or excesses of spare parts. We recommended a number of actions DOD should take to improve the accuracy and strengthen the management of lead times. For example, we made specific recommendations directed toward the Army, the Air Force, the Navy, and the Defense Logistics Agency with the intent of improving their accuracy in setting acquisition lead times. DOD mostly concurred with our recommendations. 
In a separate review of the Air Force’s inventory management practices, we found continuing problems hindering its ability to efficiently and effectively maintain its spare parts inventory for military equipment. From fiscal years 2002 through 2005, more than half of the Air Force’s secondary inventory (spare parts), worth an average of $31.4 billion annually, was not needed to support required on-order and on-hand inventory levels. We found an average of 52 percent ($1.3 billion) of the Air Force’s secondary on-order inventory was not needed to support on-order requirements. This unneeded on-order inventory indicates that the Air Force did not cancel orders or deobligate funds for items that were not needed to support requirements. When the Air Force buys unneeded items, it is obligating funds unnecessarily, which could lead to not having sufficient funds to purchase needed items. The Air Force has continued to purchase unneeded inventory because its policies do not provide incentives—such as requiring contract termination review for all unneeded on-order inventory or reducing the funding available for the Air Force Materiel Command by an amount up to the value of the Air Force’s on-order inventory that is not needed to support requirements—to reduce the amount of inventory on order that is not needed to support requirements. In addition, although the percentage of the Air Force’s on-hand inventory was reduced by 2.7 percent during these years, about 65 percent ($18.7 billion) of this inventory was not needed to support required inventory levels. We calculated that it costs the Air Force from $15 million to $30 million annually to store its unneeded items. We recommended that the Air Force improve its policies regarding on-order inventory, revalidate the need to retain items that are not needed to meet inventory requirements and for which there is no recurring demand, and take other actions to improve accountability for, and management of, its secondary inventory. 
DOD generally concurred with our recommendations. Another area of continuing concern has been the stocks maintained in the Army’s prepositioning programs. Prepositioning is one of three ways, along with airlift and sealift, that the U.S. military can deliver equipment and supplies to field combat-ready forces. The Army drew heavily from its prepositioned stocks to support Operations Iraqi Freedom and Enduring Freedom, and these sustained operations have taken a toll on the condition and readiness of military equipment. In February 2007, we reported the Army was changing its overall prepositioning strategy and, in doing so, faced major strategic and management challenges. One of these challenges was that despite recent efforts to improve requirements setting, the Army had not yet determined reliable requirements for secondary items and operational project stocks. Also, the Army does not systematically measure or report readiness for the secondary item and operational project programs. Without sound requirements or reporting mechanisms, the Army cannot reliably assess the impact of any shortfalls, determine the readiness of its programs, or make informed investment decisions about them. We recommended that the Army develop an implementation plan that, among other things, completes ongoing reevaluation of the secondary item and project stock requirements as well as establishes systematic readiness measurement and reporting of secondary items and operational project stock programs. DOD concurred with this recommendation. Despite the benefits attributed to the joint deployment distribution operations center in Kuwait, effective management of supply distribution across the theater has been hindered by ongoing problems in achieving asset visibility. Senior military commanders in Kuwait attributed these problems to a lack of interoperability among information technology systems that makes it difficult to obtain timely, accurate information on assets in the theater. 
We have previously reported that the defense logistics systems used by various components to order, track, and account for supplies are not well integrated and do not provide the information needed to effectively manage theater distribution and provide asset visibility. Officials told us their staff must use manual work-arounds to overcome the problems caused by noninteroperable information systems and estimated that their staff spend half their time pulling data from information systems, e-mailing it around for validation or coordination, consolidating it on a spreadsheet, and then analyzing it to make management decisions. In January 2007, a joint assessment conducted by several DOD components at Camp Arifjan, Kuwait, found that separate movement control battalions in Kuwait and Iraq use both automated and handwritten transportation movement requests to track air and ground movements and must consolidate manual and automated data into spreadsheets in order to capture the total theater movement picture. Neither movement control battalion has total visibility over what is occurring in both Kuwait and Iraq, nor does either have total visibility of the surface transportation resources necessary to optimize the distribution of resources. In our review of joint theater logistics, we also found continuing problems with container management that hinder asset visibility and impede DOD’s ability to effectively manage logistics operations and costs, although improvements had been made since we last reported on this issue in 2003. Some challenges that DOD faces with container management include the application of RFID on containers in the supply chain, compliance with container management processes, and the return of commercial containers to maritime carriers. 
In 2004, the Under Secretary of Defense (Acquisition, Technology, and Logistics) directed the use of active RFID on all consolidated shipments moving to, from, or between overseas locations in order to provide global in-transit visibility, and U.S. Central Command has emphasized the need to use this technology to improve asset visibility in Iraq and Afghanistan. However, according to U.S. Central Command officials, DOD continues to struggle with the application of RFID in the theater supply chain because of problems such as containers shipped without RFID tags or with tags that are broken, tags with incorrect information, or tags that are rewritten but not cross-referenced to the original shipping information. Noncompliance with container management processes established by U.S. Central Command can also limit asset visibility. For example, the Army’s system has not been able to effectively track containers as they pass through distribution channels, significantly hampering asset visibility in theater because tagged containers can become “lost” in theater, with no one able to track the location of the container or its contents. In addition, if the container is commercially owned and not returned to the carrier within a specified time period, detention charges begin accumulating. During our review of joint theater logistics, we also found that U.S. Transportation Command and the Military Surface Deployment and Distribution Command, to improve management and accountability over containers and to address the growing detention charges, developed a theater container management process and established the container management element—a unit responsible for tracking and providing management oversight of containers in the theater. In addition, the Army decided to purchase, or “buy out,” commercial containers to reduce monthly detention charges. 
Container management element officials told us that through a combination of container buyouts and increased oversight, detention charges decreased from approximately $10.7 million per month in December 2005 to $3.7 million per month in October 2006. However, although DOD has been able to reduce monthly detention charges on commercial containers, it is still experiencing problems with retaining visibility over containers, and its problem with commercial container detention charges is shifting from Iraq to Afghanistan. In addition, the Army continues to experience problems in developing and implementing system initiatives affecting asset visibility. For example, the Logistics Modernization Program, one of the Army’s major business system modernization efforts intended to manage its inventory and depot maintenance operations, has continued to experience problems with accurately recognizing revenue and billing customers, and the accuracy of its financial reports continues to be questionable. If information contained in asset accountability systems is not accurate, complete, and timely, DOD’s day-to-day operations could be adversely affected. As of September 30, 2006, the Army reported that approximately $452 million had been obligated for this system effort and estimates that it will invest at least another $895 million in this program. Also, its schedule to reach full operational capability has slipped from fiscal year 2005 to fiscal year 2010. We have recently reviewed the Army’s progress in achieving asset visibility and expect to issue our report by the end of this month. In our review of joint theater logistics, we found that DOD components have made progress developing and implementing joint theater logistics initiatives in the areas of distribution and supply support; however, the department faces a number of challenges that hinder its ability to fully realize the benefits of these efforts. 
Unless DOD successfully addresses these challenges, the initiatives are not likely to significantly improve the ability of a joint force commander to harness the diffuse logistics resources and systems that exist within the department and effectively and efficiently direct logistics functions, including distribution and supply support activities, across the theater of operations to accomplish an assigned mission. For example, initiatives to improve the coordination of surface transportation assets—mainly trucks—in a theater of operations face challenges such as potential duplication of responsibilities, the unavailability of information technology tools, and unclear lines of command and control. According to a 2005 RAND Corporation study, during the initial phase of Operation Iraqi Freedom there was no single organization deployed in theater with the authority to rebalance transportation assets across the theater and integrate and synchronize the surface deployment and distribution movements of materiel in support of the commander’s priorities. As part of its modular transformation, the Army is creating theater and expeditionary sustainment commands that are aimed in part at centralizing control over Army surface transportation assets within a theater of operations. In a separate initiative, U.S. Transportation Command created a new organization, the director of mobility forces-surface, to integrate surface deployment and distribution priorities set by the joint force commander. Army officials raised concerns about whether the theater and expeditionary sustainment commands would have the information technology tools and personnel necessary to effectively and efficiently carry out their missions. They said that these commands were designed to be smaller than their predecessors, based on an assumption that certain information technology tools would be available to enable the commands to operate with fewer personnel. 
However, some of these information technology tools—such as the next generation Mobile Tracking System, Battle Command Sustainment Support System, and Transportation Coordinator’s Automated Information for Movements System II—have experienced problems during their development that have limited their capability or have delayed their fielding. According to Army officials, the shortcomings in available information tools have resulted in the need for additional staff in the theater and expeditionary sustainment commands and have required the commands to use manual, ad hoc techniques, which are cumbersome and manpower intensive, to validate, coordinate, and analyze data for decision making. The U.S. Transportation Command-led efforts to establish the director of mobility forces-surface have also faced implementation challenges. The initial assessment of the director of mobility forces-surface pilot in Kuwait by U.S. Transportation Command and U.S. Central Command indicated that the initiative faces a number of challenges related to command and control, availability of information technology tools, securing personnel with the expertise and knowledge to use the information technology tools that are available, and potential duplication of responsibilities with other Army organizations. U.S. Central Command discontinued the pilot in May 2007 until some of these issues were resolved. In addition, the Army reviewed more than 100 proposed responsibilities of the director of mobility forces-surface and found that most of these responsibilities are already covered by the Army’s theater and expeditionary sustainment commands or other commands. DOD also has developed initiatives to consolidate and improve storage and shipping of materiel, including node management and deployable depot, joint regional inventory and material management, and theater consolidation and shipping point, but such efforts have been implemented on a limited scale. 
During our visits to Kuwait, we found that the Defense Logistics Agency and the Army were operating separate facilities that have the potential for consolidation, which could result in more efficient use of resources. We discussed this issue with senior U.S. military officials in Kuwait and with Defense Logistics Agency officials. Following these discussions and the completion of our fieldwork, the Defense Logistics Agency assessed ways to improve theater distribution and made recommendations to consolidate and relocate existing operations. Specifically, in April 2007, the Defense Logistics Agency study team recommended terminating the theater consolidation and shipping point contract, assuming these functions at the defense distribution depot, and drawing down inventory and operations at the Army general support warehouse at Camp Arifjan. Finally, various options have emerged for improving the ability of a joint force commander to exercise command and control over joint theater logistics functions. U.S. Joint Forces Command is coordinating the joint experimental deployment and support initiative, whose objective is to experiment with a range of command and control options that can provide logistics coordination, integration, and synchronization to meet the combatant commander’s priorities. The initiative builds upon DOD’s joint deployment distribution operations center concept and progresses along a continuum to include more robust organizational options. However, the military services have raised concerns about how their own roles and responsibilities for providing logistics support might be affected and have opposed expansion of the most robust command and control option that has emerged—known as the joint force support component command. 
Our discussions with officials from the combatant commands and the military services indicated that there are unresolved issues related to exercising joint command and control over logistics functions in a theater of operations. A number of officials had concerns about how organizations such as the joint force support component command would be staffed and what roles and authorities it would have. Specifically, they mentioned statutory requirements for logistics support, directive authority for logistics, and operational and financial considerations. The services expressed concerns about mandating that they provide staff to the joint force support component command, while also fulfilling their Title 10 responsibilities to man, train, and equip their forces. Officials from military service components in the geographic combatant commands raised the issue of having a service component take direction from a separate component command at the same level, rather than from a higher-level command, and they were resistant to losing personnel to such an organization because the service component commands still have tactical logistics responsibilities to fulfill. Some military service officials raised questions about the effectiveness of a joint force support component command that lacked an ability to exercise directive authority for logistics. This authority gives the combatant commander the ability to shift logistics resources within the theater in order to accomplish a mission. Officials we interviewed did not believe this authority could be delegated below the level of a joint force commander or service component commander to an entity such as the joint force support component command. 
Thus, they questioned how the joint force support component command differs from other logistics command and control organizations if the organization can make recommendations to the joint force commander but not actually direct the transfer of assets across the service components, known as cross-leveling. Readiness and financial considerations related to exercising directive authority for logistics include the military operational risks and trade-offs associated with cross-leveling. Assets diverted from one unit to support another unit may affect the giving organization’s ability to conduct a future operation, and officials raised concerns that logisticians in a separate logistics command may not fully understand the impact of cross-leveling on the next military mission. Additionally, because the services obtain funding for their own assets, several officials told us that some form of financial reconciliation must be considered when exercising directive authority for logistics. DOD spends billions of dollars to sustain key business operations intended to support the warfighter, including systems and processes related to the supply chain and other business areas. We have reported on inefficiencies in DOD’s business operations, such as the lack of sustained leadership and a comprehensive, integrated, and enterprisewide business plan. Moreover, at a time of increasing military operations and growing fiscal constraints, billions of dollars have been wasted annually because of the lack of adequate transparency and appropriate accountability across DOD’s business areas. As we have previously stated, progress in DOD’s overall approach to business transformation is needed to confront problems in other high-risk areas, including supply chain management. 
Because of the complexity and long-term nature of business transformation, we have stated that DOD needs a Chief Management Officer with significant authority, experience, and a term that would provide sustained leadership and the time to integrate DOD’s overall business transformation efforts. Without formally designating responsibility and accountability for results, reconciling competing priorities among various organizations and prioritizing investments will be difficult and could impede the department’s progress in addressing deficiencies in key business areas. Based on our long-standing body of work, pending legislative language, and the results of studies completed by the Defense Business Board and the Institute for Defense Analyses, there is a clear consensus that the department needs a Chief Management Officer and that the status quo is no longer acceptable.

The two other DOD high-risk areas that are most closely linked with supply chain management are modernizing business systems and improving financial management. Successful resolution of supply chain management problems will require investment in needed information technology. The DOD systems environment that supports these operations is overly complex and error prone, and is characterized by little standardization across the department, multiple systems performing the same tasks, the same data stored in multiple systems, and the need for data to be entered manually into multiple systems. Modernized business systems are essential to the department’s effort to address its supply chain management issues. In its plan, DOD recognizes that achieving success in supply chain management depends on developing interoperable systems that can share critical supply data. 
One of the initiatives included in the plan is business system modernization, an effort that is being led by DOD’s Business Transformation Agency and that includes achieving materiel visibility through systems modernization as an enterprisewide priority. Regarding financial management, we have repeatedly reported that weaknesses in business management systems, processes, and internal controls not only adversely affect the reliability of reported financial data, but also the management of DOD operations. Such weaknesses have adversely affected the ability of DOD to control costs, ensure basic accountability, anticipate future costs and claims on the budget, measure performance, maintain funds control, and prevent fraud. In 2005, DOD issued its Financial Improvement and Audit Readiness Plan, which is intended to provide DOD components with a road map for resolving problems affecting the accuracy, reliability, and timeliness of financial information and obtaining clean financial statement audit opinions. However, tangible evidence of improvements in financial management remains limited, and DOD recognizes that it will take several years to implement the systems, processes, and other improvements needed to address its financial management challenges.

Our recent review of joint theater logistics raises concerns about whether DOD can effectively implement this initiative without reexamining fundamental aspects of the department’s logistics governance and strategy. In this respect, joint theater logistics may serve as a microcosm of some of the challenges DOD faces in resolving supply chain management problems. We found that DOD has not developed a coordinated and comprehensive management approach to guide and oversee implementation of joint theater logistics across the department. 
Efforts to develop and implement joint theater logistics initiatives have been fragmented among various DOD components largely because of a lack of specific goals and strategies, accountability for achieving results, and outcome-oriented performance measures—key principles of sound management. While DOD has broadly defined joint theater logistics as an adaptive ability to anticipate and respond to emerging theater logistics and support requirements, it has not developed specific goals and strategies linked to this vision. In addition, DOD has not assigned accountability for achieving results under joint theater logistics and has not developed outcome-oriented performance measures that would enable the department to know whether its efforts are fully and effectively achieving a joint theater logistics capability. Without a coordinated and comprehensive approach to managing joint theater logistics, DOD lacks assurance that it is on the right path toward achieving this capability or that individual initiatives will collectively address gaps in logistics capabilities. Further, DOD will have difficulty achieving the desired improvements in distribution and asset visibility associated with joint theater logistics as portrayed in the plan. Based on our review, we recommended that DOD develop and implement a coordinated and comprehensive management approach to guide and oversee efforts across the department to improve distribution and supply support for the U.S. forces in a joint theater. This approach should encompass sound management principles, including developing specific strategies and goals, assigning accountability for achieving results, and using outcome-oriented performance measures. Moreover, in that report we recommended that DOD align its approach to joint theater logistics with ongoing actions the department is taking to reform its logistics governance and strategy, which are discussed below. 
In considering options for implementing this recommendation, we stated that DOD should determine whether any changes should be made to DOD’s organizational structure and control of resources for joint logistics support, and identify the steps needed to make these changes, including changes to existing laws, such as Title 10. DOD concurred with our recommendation. Regarding logistics governance, DOD has been testing a new approach to managing joint capabilities as a portfolio. In September 2006, the Deputy Secretary of Defense selected joint logistics as one of four capability areas for testing capabilities portfolio management. These experiments were initiated in response to the 2006 Quadrennial Defense Review, which emphasized DOD’s need to build on capabilities-based planning and management. According to DOD officials, the purpose of this test is to determine if DOD can make better leadership decisions by managing a portfolio of capabilities instead of managing systems and capabilities individually. Thus, this portfolio test is intended to enable senior leaders to consider trade-offs across previously stovepiped areas and to better understand the implications of investment decisions across competing priorities. Specifically in the joint logistics area, the portfolio includes all capabilities required to project and sustain joint force operations, including supply chain operations. While DOD officials told us the initial results of the test have been completed and have shown that portfolio management is an effective means for managing capabilities, they said that decisions had not yet been made on how to implement this new governance approach. The decisions DOD makes on capabilities portfolio management will also influence the development of its logistics strategy. 
In our prior work, we have noted that DOD has undertaken various efforts over the years to identify, and plan for, future logistics needs, but it has lacked an overarching, consistent logistics strategy. Last year, the department began to develop a “to be” road map to guide future logistics programs and initiatives. DOD officials described the “to be” road map as portraying where the department is headed in the logistics area and how it will get there; monitoring progress toward achieving its objectives; and institutionalizing a continuous assessment process that links ongoing capability development, program reviews, and budgeting. According to DOD officials, the initiatives in the plan will be incorporated into the “to be” road map. At this time last year, the first edition of the “to be” road map was scheduled for completion in February 2007, in conjunction with the submission of the President’s Budget for Fiscal Year 2008, with annual updates planned. However, DOD subsequently put the “to be” road map on hold pending the completion of the capabilities portfolio management test. DOD officials have told us that the “to be” road map is now scheduled to be completed in summer 2008. In January, we recommended that DOD improve its ability to guide logistics programs and initiatives across the department and to demonstrate the effectiveness, efficiency, and impact of its efforts to resolve supply chain management problems by completing the development of a comprehensive, integrated logistics strategy that is aligned with other defense business transformation efforts. DOD concurred with our recommendation. In reviewing DOD’s approach to developing and implementing joint theater logistics initiatives, we found that the diffused organization of DOD’s logistics operations, including separate funding and management of resources and systems, complicates DOD’s ability to adopt a coordinated and comprehensive approach. 
Several recent studies of DOD’s logistics system have reached similar conclusions. Since 2003, a number of studies have recommended changes to DOD’s organizational structure for providing joint logistics and supply support to military operations. Some of these organizations have noted that control over resources is a critical issue to be addressed. For example, the Defense Science Board recommended creation of a joint logistics command that would combine the missions of U.S. Transportation Command, the Defense Logistics Agency, and service logistics commands. The Center for Strategic and International Studies also suggested the creation of a departmentwide logistics command responsible for end-to-end supply chain operations. Regarding resource allocation, this study further stated that resources should be organized, managed, and budgeted largely along military service lines, but in those instances where joint capability needs are not being met by the services, the Secretary must turn to joint processes and entities. The Lexington Institute, which also recommended creation of a U.S. logistics command at the four-star level, concluded that Title 10 may need to be amended in order to create this command. The Lexington Institute also concluded that existing funding mechanisms act as disincentives for joint logistics transformation and interoperability. The Defense Business Practice Implementation Board, while not agreeing with the idea of combining U.S. Transportation Command and the Defense Logistics Agency, recommended that DOD elevate leadership for supply chain integration by designating a new under secretary of defense who would have authority to direct integration activities, including control over budget decisions affecting these two components and the military services. 
While we noted that transformational changes such as those proposed by these organizations may not be possible without amending existing laws, the scope of our joint theater logistics review did not include an assessment of these proposals or what changes, if any, would require congressional action. Also contributing to coordination problems in the area of supply chain management have been difficulties in clearly defining the responsibilities and authorities of defense components that have a role in supply chain operations. For example, although the Secretary of Defense in 2003 designated the Commander, U.S. Transportation Command, as DOD’s distribution process owner—with responsibilities for overseeing the overall effectiveness, efficiency, and alignment of DOD-wide distribution activities—DOD has yet to issue a directive defining the process owner’s authority, accountability, resources, and responsibility. We have recommended that DOD enhance its ability to take a more coordinated approach to improving the supply distribution system by, among other things, clarifying the scope of responsibilities, accountability, and authority between the distribution process owner and other organizations. Although DOD did not concur with this recommendation at the time we issued our report in 2005, DOD officials have recently told us they plan to issue a directive aimed at more clearly defining the role of the distribution process owner. Until this directive is issued, the responsibilities and authorities of the distribution process owner remain unclear. Echoing this theme, the Defense Business Board in April 2007 recommended that DOD take steps to clearly identify decision-making authority regarding supply chain integration. 
Specifically, the Defense Business Board recommended that DOD define and communicate enterprise goals in order to align initiatives; clearly define responsibilities and authorities of all players in the supply and distribution processes; and allocate responsibility and authority to set direction and oversee progress, and make necessary decisions to carry out DOD’s agreed-upon supply chain management strategy and achieve enterprise goals. DOD, like much of the federal government, will face critical challenges during the 21st century that will test fundamental notions about how agencies and departments should be organized and aligned to carry out their missions. For example, the department faces challenges in accomplishing its transformation goals and making improvements in key business areas such as supply chain management. We have suggested that decision makers may need to reexamine fundamental aspects of DOD’s programs by considering issues such as whether current organizations are aligned and empowered to meet the demands of the new security environment as efficiently as possible and what kinds of economies of scale and improvements in delivery of support services would result from combining, realigning, or otherwise changing selected support functions, including logistics. Between now and the next update of our high-risk series in 2009, we plan to continue to assess DOD’s progress in resolving supply chain management problems against the criteria we have established for removing a high-risk designation. In addition to monitoring the progress of DOD’s plan, we plan to conduct audits related to specific aspects of supply chain management. As I indicated earlier, a priority for the department as it moves forward should be to track and assess the outcomes achieved through its initiatives and the progress made in resolving supply chain management problems in the three focus areas of asset visibility, requirements forecasting, and materiel distribution. 
We will also consider progress made in defense business transformation, business system modernization, and financial management because of the close linkage between these efforts and DOD’s success in improving its supply chain management. We look forward to working with the department to provide an accurate appraisal of progress toward the goal of successfully resolving problems that have hindered effective and efficient supply chain management. Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions you or other Members of the Subcommittee may have. For further information regarding this testimony, please contact William Solis at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making contributions to this testimony include Tom Gosling, Assistant Director; Karyn Angulo; Larry Junek; and Marie Mak. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The availability of spare parts and other critical items provided through the Department of Defense's (DOD) supply chains affects the readiness and capabilities of U.S. military forces. Since 1990, GAO has designated DOD supply chain management as a high-risk area. In 2005, DOD developed a plan aimed at addressing supply chain problems and having GAO remove this high-risk designation. DOD's plan focuses on three areas: requirements forecasting, asset visibility, and materiel distribution. GAO was asked to provide its views on (1) DOD's progress in developing and implementing the initiatives in its plan, (2) the results of recent work relating to the three focus areas covered by the plan, and (3) the integration of supply chain management with efforts to improve defense business operations. GAO also addressed broader issues of logistics governance and strategic planning. This testimony is based on prior GAO reports and analysis. To determine whether to retain the high-risk designation for supply chain management, GAO considers factors such as whether DOD makes substantial progress implementing improvement initiatives; establishes a program to validate the effectiveness of the initiatives; and completes a comprehensive, integrated strategy. The most recent update to DOD's plan shows that DOD has made progress developing and implementing its supply chain management improvement initiatives. DOD is generally staying on track for implementing its initiatives, although there have been delays in meeting certain milestones. However, the long-term time frames for many of these initiatives present challenges to the department in sustaining progress toward substantially completing their implementation. The plan also lacks outcome-focused performance measures for many individual initiatives and the three focus areas, limiting DOD's ability to fully demonstrate the results achieved through its plan. 
Increasing DOD's focus on outcomes will enable stakeholders to track the interim and long-term success of its initiatives and help DOD determine if it is meeting its goals of more effective and efficient supply chain management. GAO's recent work has identified problems related to the three focus areas in DOD's plan. In the requirements area, the military services are experiencing difficulties estimating acquisition lead times to acquire spare parts for equipment and weapon systems, hindering their ability to efficiently and effectively maintain spare parts inventories for military equipment. Challenges in the asset visibility area include lack of interoperability among information technology systems, problems with container management, and inconsistent application of radio frequency identification technology, which make it difficult to obtain timely and accurate information on assets in theater. In the materiel distribution area, challenges remain in coordinating and consolidating distribution and supply support within a theater. Improving defense business operations is integral to resolving supply chain management problems. Progress in DOD's overall approach to business transformation is needed to confront problems in other high-risk areas, including supply chain management. Because of the complexity of business transformation, GAO has stated that DOD needs a Chief Management Officer with significant authority, experience, and a term that would provide sustained leadership and the time to integrate DOD's overall business transformation efforts. GAO's work, pending legislation, and other recent studies indicate a consensus that the status quo is no longer acceptable. GAO's recent review of joint theater logistics raises concerns about whether DOD can effectively implement this initiative without reexamining fundamental aspects of the department's logistics governance and strategy. 
In this respect, joint theater logistics may serve as a microcosm of some of the challenges DOD faces in resolving supply chain management problems. Moreover, GAO recommended in that report that DOD align its approach to joint theater logistics with ongoing actions the department is taking to reform its logistics governance and develop its logistics strategy. Several recent studies of DOD logistics systems have recommended changes to DOD's organizational structure for providing joint logistics and supply support to military operations.
The Government Performance and Results Act (GPRA) of 1993 requires federal agencies to establish measures to determine the results of their activities. Such measures are a prerequisite for making informed decisions on allocating scarce resources to areas likely to attain results that advance the agency’s mission and achieve its goals. One of IRS’s strategic goals is to ensure taxpayer compliance, but the agency lacks current measures of taxpayers’ voluntary compliance. Having such measures would give IRS an understanding of current compliance levels and help to identify steps that are likely to lead to improved compliance. There are three types of voluntary compliance measures: filing compliance, which measures the percent of taxpayers who file returns in a timely manner; payment compliance, which measures the percent of tax payments that are paid in a timely manner; and reporting compliance, which measures the percent of actual tax liability that is reported accurately on returns. Although IRS’s NRP plans include reviews of all three types of compliance, the majority of its efforts have been devoted to the development of reporting compliance measurement procedures. Reporting compliance is also the only aspect of NRP that will include audits of taxpayers. For many years, IRS has periodically used random audits of tax returns to measure the level of voluntary reporting compliance. However, IRS last measured voluntary reporting compliance over a decade ago when it did line-by-line audits of about 50,000 individual tax year 1988 tax returns. IRS planned to measure reporting compliance using 1994 returns in an ambitious effort involving over 150,000 randomly selected returns, including 92,000 individuals (including sole proprietorships and farmers) as well as corporations, partnerships, and S-corporations. 
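The three voluntary compliance measures described above are, at bottom, simple ratios. The sketch below illustrates how each might be computed; all of the figures are hypothetical, not actual IRS data.

```python
# Illustrative sketch of the three voluntary compliance measures.
# All input numbers below are hypothetical, not actual IRS figures.

def filing_compliance(timely_returns, required_returns):
    """Percent of required returns that were filed in a timely manner."""
    return 100.0 * timely_returns / required_returns

def payment_compliance(timely_payments, total_tax_due):
    """Percent of tax due that was paid in a timely manner."""
    return 100.0 * timely_payments / total_tax_due

def reporting_compliance(reported_liability, actual_liability):
    """Percent of actual tax liability reported accurately on returns."""
    return 100.0 * reported_liability / actual_liability

# Hypothetical example: 118 million timely returns out of 129 million required.
filing_rate = filing_compliance(118_000_000, 129_000_000)
```

Each measure answers a different question, which is why NRP treats them separately; only the reporting measure requires audits, since the other two can be computed from filing and payment records IRS already holds.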
Before beginning the audit process, however, IRS cancelled the study because of its cost and because of criticism from Congress, the media, the tax community, and taxpayers about the size of the sample and the burden imposed by the audits on compliant taxpayers. IRS’s compliance research studies have provided more than just a measure of voluntary compliance. They have also been used by IRS to identify compliance trends and to suggest changes in tax laws and regulations to improve voluntary reporting compliance. Voluntary reporting compliance study results have also been the basis for formulas to help IRS select returns for enforcement audits. IRS conducts hundreds of thousands of enforcement audits each year as part of its overall enforcement efforts. Unlike compliance study audits, enforcement audits are not random. IRS targets enforcement audits on likely noncompliant returns—those returns with a high probability that an audit would detect improperly reported tax liability. According to IRS, return selection formulas—first used to select 1968 tax returns for audit—have reduced the number of audits that resulted in no change to tax liability. No change enforcement audits are a burden on compliant taxpayers and use scarce IRS resources. In the year before the formulas were first used, the no change rate was 46 percent. For returns filed in 1994, the no change rate was about 19 percent. These formulas were last updated using 1988 tax return data, however, and as they become more out of date, the percentage of formula-identified enforcement audits resulting in no change has increased. IRS reports that 24 percent of enforcement audits of 1998 returns resulted in no change, and the agency projects that this rate will grow to 27 percent for returns filed in 2005. We have previously reported on the need for IRS to conduct new compliance research. 
In 1996, we recommended that IRS develop a cost-effective, long-term strategy to ensure the continued availability of reliable compliance data and reiterated the need for new compliance research studies in subsequent reports and testimonies. (See app. I for a list of reports we have issued relating to compliance measurement.) Others, including the IRS Commissioner and the IRS Oversight Board, recognize the need for IRS to measure voluntary compliance. The IRS Commissioner has said that measuring voluntary compliance is a critical part of IRS’s overall organizational transformation. In its fiscal year 2001 annual report, the IRS Oversight Board supported the National Research Program and requested congressional support for the program. To describe NRP, we have been in frequent contact with representatives of IRS’s NRP Office and other IRS officials as they have designed the program and planned for its implementation. We reviewed IRS’s draft prospectus and the detailed plans that act as a blueprint for the NRP processes and components. These documents describe the program’s objectives and the steps needed to implement the program. We also received briefings from the program’s designers and discussed the program with other IRS officials, including representatives of the Office of Research and the Large and Mid-Sized Business, Small Business/Self Employed, and Wage and Investment operating divisions. The NRP descriptions and other information in this report were current as of May 2002. We assessed NRP’s design and progress towards implementation in light of several criteria. We considered how well NRP addresses government guidance on performance measurement and data reliability. GPRA, for example, requires agencies to establish meaningful performance goals aligned with their mission and measure progress using sound, objective performance data. 
In addition, the Office of Management and Budget issued guidelines in February 2002 that require agencies to ensure that information they generate be objective and reliable. We also considered how well NRP meets general research design guidelines, such as GAO’s draft guidelines for ensuring the reliability of computer-based data. We also reviewed past taxpayer compliance research efforts and discussed the program with former IRS commissioners. Furthermore, we assessed how well the program meets the design principles IRS has defined, assessed whether NRP appears suited to accomplish IRS’s stated objectives for voluntary reporting compliance research, identified key steps in IRS’s NRP implementation plans and schedules, and assessed IRS progress towards meeting those milestones. As NRP was largely still under development throughout the time of our review and key documents were still in draft at the time this report was prepared, our assessment of NRP should be considered preliminary pending the program’s final design. At the time of our study, IRS was also having its sample design reviewed by two outside contractors. These reviews were not complete at the time we were preparing this report. In light of these reviews, we did not independently replicate calculations for the overall NRP sample or the sizes of individual strata. With respect to the NRP sample, we held detailed discussions with the IRS officials responsible for developing the sample design and discussed with them the rationale for their decisions regarding the sample. We also reviewed statistical analyses and other studies done by IRS to justify sample design decisions. We conducted our work between September 2001 and May 2002 in accordance with generally accepted government auditing standards. The purpose of NRP is to produce data that IRS can use to measure overall reporting compliance, update existing audit selection formulas, and identify potential ways to improve voluntary compliance. 
Under NRP, IRS will review randomly selected individual tax returns to determine whether the taxpayer has complied with statutory income, expense, and tax reporting requirements. The major components of the program include (1) a random sample of individual tax returns large enough to meet program objectives; (2) a specially trained cadre of examiners; (3) an assortment of casebuilding tools to verify as many items reported on tax returns as possible without contacting the taxpayer; (4) a tax return classification process for determining the level of audit, if any, a return warrants and which items must be verified; and (5) an examination process that uses structured procedures and managerial reviews. IRS has also developed a data analysis plan that describes how it plans to use the data to address each of the program objectives. According to IRS, the NRP sample is designed so that the results are representative of the population of individuals and self-employed taxpayers who filed tax year 2001 Form 1040 tax returns. The sample is intended to produce estimates of noncompliance and potential tax change as well as capture differences in reporting compliance levels between this study and subsequent ones. The NRP sample consists of a total of about 49,000 returns representing the population of about 129 million individual Form 1040 filers. These filers include sole proprietor business owners who file schedule C (Profit or Loss From Business – Sole Proprietorship) and farmers who file schedule F (Profit or Loss From Farming), as well as taxpayers whose income consists solely of wages and investment income. The NRP sample is stratified or divided by type of Form 1040 filer and by income level. The sample includes some substratification of higher-income taxpayers, which was done at the request of the IRS business operating divisions that will be key users of NRP data. 
The purpose of the stratification is to permit IRS to develop audit selection formulas and other information specific to different types of taxpayers. Returns in each stratum will be weighted to make the sample representative of the overall population. See appendix II for a description of the NRP sample. IRS plans to select a cadre of examiners and other staff from its current employees and train them to implement the program. IRS estimates that it will have over 1,000 full-time equivalent staff working on the program during the peak examination phase, which is expected to be in fiscal year 2003. Most of the cadre will consist of revenue agents from the Small Business/Self Employed operating division, but the cadre will also include correspondence examiners from the Wage and Investment operating division. According to NRP’s plans, the cadre will undergo substantial training designed to ensure consistency and quality in NRP implementation. IRS plans state that the extensively trained cadre represents a sizeable investment in human capital and an attempt to develop lasting institutional knowledge for subsequent NRP studies. The NRP proposal includes using casebuilding tools to aid examiners in determining whether IRS needs to have any contact with taxpayers to verify the accuracy of information reported on their tax returns. The casebuilding tools consist of data from both IRS and third-party sources. IRS’s internal casebuilding tools include return information from the prior 3 years, audit history, payment and filing history, information return data reported by third parties (banks, lending institutions, and others), and bank reports on large cash transactions. Use of these data is intended to rule out compliance issues that can be verified without contacting taxpayers. Casebuilding tools also include data from two third-party sources. 
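The weighting of returns within strata works roughly as follows: each sampled return in a stratum stands in for the number of population returns divided by the number sampled there. The sketch below illustrates the idea; the strata and all counts are invented for illustration and are not NRP's actual design.

```python
# Sketch of stratified-sample weighting: each sampled return in stratum h
# carries weight N_h / n_h, where N_h is the stratum's population size and
# n_h its sample size. Strata and counts are hypothetical, not NRP's design.

strata = {
    # stratum: (population size N_h, sample size n_h, noncompliant in sample)
    "wage_low_income":  (80_000_000, 10_000, 1_200),
    "wage_high_income": (30_000_000, 15_000, 2_400),
    "self_employed":    (19_000_000, 24_000, 6_000),
}

def weighted_noncompliance_rate(strata):
    """Estimate the population noncompliance rate by weighting each
    stratum's sample count back up to its share of the population."""
    est_noncompliant = sum(n_pop / n_samp * bad
                           for n_pop, n_samp, bad in strata.values())
    population = sum(n_pop for n_pop, _, _ in strata.values())
    return est_noncompliant / population

rate = weighted_noncompliance_rate(strata)
```

Note how the self-employed stratum is deliberately oversampled relative to its population share; the weights undo that oversampling so the overall estimate still represents all filers, which is the point of weighting described above.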
The first is ChoicePoint, an external public database containing real estate and other asset ownership information (e.g., motor vehicle registrations and ownership of luxury items like watercraft and aircraft). For NRP purposes, ChoicePoint is to enable examiners to confirm basic asset information. The other third-party data source is the Dependent Data Base, which is a combination of Department of Health and Human Services and the Social Security Administration data. These data are to provide custody information that can be used to help determine the validity of dependent and Earned Income Tax Credit (EITC) claims. According to IRS, although information from IRS and other databases will help NRP classifiers determine whether or not a return warrants an audit, the data will only be used to validate return items or identify potential problems and not to make changes to tax liability. IRS officials stressed that NRP-recommended changes to tax liability will always be based on information obtained by audit. IRS plans for NRP cadre members to use the data compiled during casebuilding to classify the returns in the sample according to the level of audit, if any, they should be given. IRS plans to train about 100 members of the NRP cadre to classify returns according to NRP guidelines. These specially trained staff are to classify returns as either accepted as filed (i.e., all line items are verified without any taxpayer contact) or as needing some form of audit. IRS’s preliminary estimates are that about 8,000 returns will be accepted without any taxpayer contact, most of them coming from nonbusiness income returns. For returns needing an audit, classifiers will also specify which line items need verification and determine whether the additional information needs to be obtained through correspondence or a face-to-face audit. 
For correspondence examinations, the returns are to contain no more than two simple tax issues, such as those dealing with filing status, exemptions and dependent claims, various tax credits besides the EITC (e.g., education credit, fuel tax credit) and alimony deductions. IRS’s preliminary estimates are that about 9,000 returns will be classified into the correspondence audit category. If a classifier determines that some line item information cannot be verified with casebuilding materials or simple correspondence, the line items will be noted on a check sheet along with a brief explanation and sent for a face-to-face audit. IRS will require taxpayers whose returns fall into this category to verify information on their returns in person with an examiner from the NRP cadre. IRS’s preliminary estimates are that about 30,000 returns, including most self-employed taxpayers and high-income individuals, will fall into the face-to-face audit category. IRS plans to have NRP managers review many classification decisions. A supervisory review team will review all “accept as filed” classification decisions throughout NRP. This team will also review a sample of classification decisions indicating the need for an audit. As part of the classification process, IRS plans to select about 1,683 returns—561 from each of the three classification categories—that would otherwise have been accepted as filed or sent for correspondence or face-to-face audits to undergo intensive line-by-line audits. IRS determined that, since it plans to accept return information as accurate without getting additional information from taxpayers, the study results might misstate compliance levels. Therefore IRS developed this subsample of classified returns to compare NRP study results with what might have been detected by comprehensive line-by-line audits. 
According to IRS, this is to provide it with some insights as to the accuracy of the casebuilding and classification processes, the bias (if any) introduced by the NRP approach, a basis for correcting any bias in the aggregate NRP measures, and indications of where future studies might be improved. IRS has termed this comparison a “calibration” of the study results. Figure 1 is a breakdown of the NRP sample by the level of IRS contact that taxpayers with returns in the NRP sample will experience. The number of returns in the face-to-face audit, correspondence audit, and no contact categories are IRS’s preliminary estimates and may change based on the results of upcoming tests of NRP processes. The actual number of returns in each of these categories will depend on the results of the NRP classification process. The number of returns to be selected for line-by-line audits, however, has been predetermined and, according to IRS, will not change. Returns that cannot be fully verified using casebuilding data will be sent for examination—either through correspondence or face-to-face audits. Correspondence audits will be done with relatively simple returns that have potential underreporting or unverifiable information in one or two areas. In a correspondence audit, IRS will request that the taxpayer send documentation verifying the line items in question. According to IRS, in many ways, NRP face-to-face audits (with the exception of those in the calibration sample) will resemble IRS’s enforcement audits. Examiners will determine whether the information reported on the return is accurate or adjustments need to be made. If they determine that a taxpayer’s tax liability is understated, the additional tax will be assessed. In other ways, however, NRP audits will differ from enforcement audits. For example, in enforcement audits, examiners use classification check sheets as a guide but are not limited to the line items flagged during classification. 
IRS has established examination guidelines for NRP examiners to follow that require justification to audit unclassified line items. IRS officials have said that these guidelines help ensure that research consistency and the promise of minimizing taxpayer burden are not compromised. IRS officials also say that NRP guidelines will require examiners to record all tax changes, regardless of amount, though taxpayers will not be asked to pay additional taxes uncovered by an NRP audit that fall below a predetermined limit. IRS’s plans state that the examination component of NRP will be subject to reviews both while the examinations are underway and after the examinations are complete. NRP managers will review ongoing examinations periodically. Also, all cases will be subject to IRS’s regular quality review steps for all examinations, as well as a special review conducted by NRP managers. IRS will use the Report Generation Software (RGS) system to capture NRP results. Included in the database of NRP results will be examiners’ determinations of the reasons for any noncompliance that they found. IRS examiners will use the RGS menu of 46 reason codes to categorize reasons for taxpayer noncompliance. NRP examiners will also prepare electronic workpapers that will be attached to each RGS case file to aid researchers using the NRP database. The RGS case files, including these workpapers, will be archived in a database. The NRP Office has drafted a plan to guide the usage of data gathered in NRP classification and examination. IRS specifies how NRP data will allow the agency to conduct the following four broad categories of analysis that IRS describes as both critical to the IRS mission and not possible using alternative data sources: measuring overall compliance (the voluntary reporting rate and the underreporting portion of the tax gap); updating existing audit selection and resource allocation systems and developing new ones; estimating the impact on compliance and revenue of legislative and administrative changes; and identifying potential ways to improve voluntary compliance. 
The plan also outlines specific uses of NRP results by IRS’s Office of Research and by the business operating divisions. For example, the Office of Research will focus on the development of compliance measures for the taxpayer population as a whole while the operating divisions will use NRP data to look at compliance issues for their specific customers. These measures are essentially based on a comparison of misreported amounts, by line item, with what the amounts should have been. In another example, IRS expects the operating divisions to be able to use information about the characteristics of specific pockets of noncompliance, including the causes of the noncompliance, to identify solutions. For example, if a large number of problems appear to be due to ignorance of IRS rules, the affected operating division could consider developing a new taxpayer education program. The analysis plan also describes how the Small Business/Self-Employed division will use NRP data, for example, to identify differences in compliance in dissimilar geographical areas, by industry and by type of business organization. The Wage and Investment division plans to focus on other NRP data, such as the relationship between Earned Income Tax Credit returns that are filed on time and paid in full and those that are not. Based on our assessment of NRP in light of government guidance on performance measurement and data reliability, IRS appears likely to meet the objectives the agency has set for NRP. IRS has designed NRP to meet the agency’s need for up-to-date reporting compliance data, including overall compliance rate information, data to support updating audit selection formulas, and information on specific pockets of noncompliance. 
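The comparison of misreported amounts, by line item, with what the amounts should have been can be sketched as a simple ratio of reported to corrected totals. All figures below are invented for illustration and are not NRP results.

```python
# Sketch of a line-item voluntary reporting rate: amounts taxpayers reported
# compared with the amounts examiners determine they should have reported.
# All figures are hypothetical, not NRP data.

line_items = [
    # (line item, amount reported, corrected amount per audit)
    ("wages",             52_000, 52_000),
    ("schedule_c_income", 18_000, 24_500),
    ("capital_gains",      3_000,  4_200),
]

def voluntary_reporting_rate(line_items):
    """Share of the corrected total liability base that taxpayers reported
    on their own; the shortfall corresponds to the underreporting gap."""
    reported = sum(r for _, r, _ in line_items)
    correct = sum(c for _, _, c in line_items)
    return reported / correct

rate = voluntary_reporting_rate(line_items)
```

Because NRP records discrepancies by line item, a rate like this can be computed for any slice of the data, which is how the operating divisions would locate specific pockets of noncompliance such as the Schedule C underreporting in this made-up example.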
At the same time, IRS has included several features in NRP that will meet the important goal of minimizing NRP’s intrusiveness and the burden on taxpayers whose returns are in the NRP sample. IRS has made substantial progress towards the implementation of NRP, though important steps remain to be completed. IRS designed NRP to address its need for up-to-date reporting compliance data. The sample is of sufficient size to meet both IRS’s need for information about the Form 1040 filing population in general and specific information about particular types of filers. The data IRS plans to capture includes the sort of detailed information that the agency will need to determine overall compliance levels, update selection formulas, and identify specific compliance problems. IRS has developed appropriate quality assurance mechanisms for NRP data collection, and its data analysis plan appropriately shows how IRS is to use the data it collects. If implemented as planned, NRP should produce sound, objective, and reliable performance data in accord with government guidance on performance measurement and data reliability. We evaluated the NRP sampling plan and found that the design is appropriate to meet IRS’s goals of producing a measure of voluntary compliance and development of return selection formulas. The sample of about 49,000 returns is designed to be representative of the population of about 129 million Form 1040 returns. The sample is stratified by type of return and amount of income, and we found that the sample is reasonably designed for making compliance measures for the taxpayer population as a whole and for subgroups of taxpayers. Also, the sample is stratified to provide adequate data for developing return selection formulas for examinations of Small Business/Self Employed returns and high-income Wage and Investment returns. The amount of data IRS plans to capture should be sufficient to identify specific compliance issues. 
NRP plans include capturing information about discrepancies between amounts reported on returns and what those amounts should have been, regardless of the size of the difference. The plans also specify that examiners are to determine the reasons for any noncompliance that they find – an improvement over past compliance data that could prove very useful to users of NRP data. IRS also plans to ensure that all workpapers developed by NRP examiners be prepared in electronic format and included in the NRP database. This represents another improvement over past compliance research efforts and should provide a valuable source of information for researchers and other users of NRP data. IRS has also included quality assurance mechanisms in NRP to help ensure that the data collected is complete and accurate. For example, IRS will have supervisors review all “accept as filed” classification decisions and a sample of decisions to send returns for either correspondence or face-to-face audits, providing assurance that classification decisions will be made accurately and consistently. This corresponds with both IRS’s NRP goal of getting accurate data as well as general research design principles. NRP’s use of line-by-line audits to calibrate the program’s results also provides a useful check on the accuracy of the classification process. The NRP guidelines also include procedures to check the quality of examination decisions, including the use of the Examination Quality Management System, which IRS uses for enforcement audits, and additional quality review steps developed specifically for NRP. One characteristic of good research designs is that they have a detailed analysis plan ensuring that study objectives are met with appropriate data. The NRP data analysis plan meets this criterion because it describes IRS’s goals for this research and how it will make appropriate use of the data NRP generates.
The plan describes how NRP data will enable IRS to meet each of the four objectives it has laid out — measure overall reporting compliance, update existing audit selection and resource allocation systems and develop new ones, estimate impact on compliance and revenue of legislative and administrative changes, and identify potential ways to improve voluntary compliance. The plan includes specific questions that the Office of Research and the business operating divisions will be able to answer using the data generated by NRP. For example, the plan describes how IRS will be able to use NRP data to identify potential causes of noncompliance, which can be used to develop programs to reduce the incidence of noncompliance. Such research into the causes of noncompliance should be enhanced by NRP’s inclusion of reason codes and electronic workpapers in the NRP database. IRS has included important features in NRP that will minimize the time, expense, and overall intrusiveness associated with taxpayer contacts under the program. IRS has set minimizing taxpayer burden as one of the guiding principles of NRP, and its plans show that it has met that goal through (1) the use of IRS and third-party information to minimize the amount of information requested of taxpayers, (2) the development of classification guidelines to ensure that taxpayer contacts are limited to auditing only those line items on returns that cannot be verified without an audit, and (3) the development of a specially trained cadre of examiners to carry out NRP audits. By using information it has in its own files plus data from third-party sources, IRS has taken a substantial step towards minimizing the intrusiveness of the audits that will take place under NRP. By reviewing as much information as is available without taxpayer contact, IRS will eliminate the need for some taxpayers to be contacted at all, while others will be asked only to verify information that cannot be verified any other way.
To ensure that casebuilding tools are used properly, IRS has drafted training materials and Internal Revenue Manual sections that describe the various casebuilding tools and their uses. We reviewed these documents and found that they meet the overall NRP design concept and goals set by IRS to minimize taxpayer burden. The decision not to audit every line on every return but, instead, classify the returns in the NRP sample to determine what needs to be verified also represents a substantial change from past compliance research efforts and an important step towards IRS’s goal of minimizing the impact on taxpayers in the NRP sample. Our assessment of IRS’s quality review procedures, draft training material, and draft Internal Revenue Manual sections on the processes and procedures to be used in classifying returns showed that, if properly implemented, they should aid classifiers in correctly determining in which of the three taxpayer-contact categories returns should be placed. These decisions are vital to minimizing taxpayer burden. The NRP cadre is another important element of IRS’s ability to minimize the burden associated with taxpayer contacts under NRP. IRS has defined what it wants from a cadre of examiners – specifically that these staff be experienced, well-trained, and supplied with appropriate information, tools, and management support. The NRP cadre training materials and plans that we reviewed are sufficiently detailed to meet the goals IRS has spelled out for NRP, in particular that the cadre be prepared to keep the intrusiveness of NRP audits to a minimum and otherwise make the necessary distinctions between regular enforcement audit procedures and procedures specific to NRP. IRS’s progress so far indicates that the agency is on track to complete its development of NRP and begin auditing taxpayers under the program in late 2002, though not necessarily by the currently scheduled date of October 1, 2002.
However, important work remains for IRS to complete before the agency can be confident that NRP will successfully generate the reporting compliance data it needs. Specifically, IRS needs to select and train a cadre of examiners to conduct NRP, complete testing of NRP classification processes before they are implemented, and test the suitability of existing IRS information systems to capture and store NRP data. IRS met its goal to finalize its sample design by early February 2002. The agency needed its sample defined by that date in order to make sure that returns in the NRP sample were not selected for enforcement audits or for other research efforts, and to be able to separate returns after they were processed normally but before they were shipped off to storage. IRS met this deadline and is currently identifying and setting aside returns for NRP. The remaining sample-related work is for the agency to continue identifying and retrieving the returns selected for NRP. IRS has substantially completed the tasks associated with developing guidelines and training materials for casebuilding, classification, and examination of returns in the NRP sample. We assessed these guidelines and training materials and found that they meet the overall NRP design concept and goals set by IRS and general research design and data quality standards. According to IRS, plans for how it will identify the cadre of examiners to conduct NRP are complete but have not been implemented. However, important deadlines requiring a trained cadre are quickly approaching. The current NRP plan indicates that, as of May 2002, selecting the cadre should already be complete and cadre training should already be underway, but IRS has not met these milestones. Having the right people to perform NRP classification and examination tasks is very important.
IRS’s plans for NRP and general research program design principles point to the importance of consistency and accuracy in all aspects of a research program like NRP, and the experience and commitment of the NRP cadre is an important part of ensuring this consistency and accuracy. Taxpayer contacts will also be of critical importance to how this program is perceived by the public and, again, having the right people in the examiner cadre will have much to do with the public’s acceptance of NRP. IRS now plans to begin training the NRP cadre in August 2002. This delay may affect IRS’s ability to begin classifying returns in August 2002 and may mean that IRS will not meet the currently scheduled date of October 1, 2002, to begin taxpayer contacts. IRS officials acknowledged that this date may not be met but said that the overall goals of the program will not suffer if taxpayer contacts start later in the year. The officials noted that it is more important to make sure that the design, testing, and implementation of NRP be complete before taxpayers are contacted than it is that IRS meet a self-imposed deadline. IRS recognized that it needs to test its NRP procedures, particularly as they relate to casebuilding and classification, and to modify procedures based on what those tests show. IRS has completed two tests of NRP casebuilding and classification procedures, but has extended the date of an important pre-implementation test to a date close to the start of the actual implementation of these steps. IRS has conducted two tests of NRP casebuilding and classification procedures. In October 2001, IRS conducted a preliminary classification process test to aid in the development of classification guidelines, including the use of casebuilding tools. The test consisted of two phases— one making classification decisions without the use of casebuilding tools and the other using casebuilding tools.
The test results indicated that casebuilding tools were useful in detecting misreported income, but that specific guidelines were needed for using the tools and for making classification decisions regardless of the amount of misreporting found. A second test, conducted in early May 2002, was done on 30 previously audited returns that were part of a tax year 1999 EITC compliance study. This test was to assess draft classification guidelines and to test for consistency among classifiers. The test resulted in changes to training materials and classification guidelines. IRS plans to do additional pre-implementation evaluations of the NRP casebuilding and classification process, but only a very limited evaluation of this process using actual audited returns. The agency plans to conduct an additional test in July 2002 using unaudited returns as was done in the October 2001 test, as well as some previously audited EITC returns. The July 2002 test will involve actual NRP processes in order to provide IRS with an opportunity to make final adjustments to those processes prior to NRP implementation. One of the purposes of this test is to determine whether the classification guidelines will result in consistent classification decisions among classifiers. To test the guidelines, more than one classifier will classify each return and the results will be compared. IRS expects that this test will allow it to make changes to the guidelines so that more consistent classification decisions are made. Since most of the returns used in the test will not have been previously audited, the results will give IRS assurance only that the classification decisions were consistent, not that they were correct. Testing the NRP classification process using previously audited returns would allow a comparison of classification results with actual audit findings.
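The consistency comparison planned for the July test, in which more than one classifier classifies each return and the results are compared, amounts to computing an agreement rate across classifier pairs. The sketch below illustrates one simple way to do that; the classifier names and decisions are hypothetical, and the three decision categories follow the accept/correspondence/face-to-face distinction described in this report.

```python
from itertools import combinations

def pairwise_agreement(decisions):
    """decisions: dict mapping classifier name -> list of decisions, one per
    return (e.g. 'accept', 'correspondence', 'face_to_face').
    Returns the fraction of (classifier pair, return) comparisons that agree,
    a simple measure of classification consistency."""
    agree = total = 0
    for a, b in combinations(decisions.values(), 2):
        for x, y in zip(a, b):
            total += 1
            agree += (x == y)
    return agree / total

# Hypothetical decisions by three classifiers on the same four returns
calls = {
    "classifier_1": ["accept", "face_to_face", "correspondence", "accept"],
    "classifier_2": ["accept", "face_to_face", "face_to_face", "accept"],
    "classifier_3": ["accept", "correspondence", "correspondence", "accept"],
}
print(pairwise_agreement(calls))
```

A low agreement rate on such a test would signal, as IRS anticipated, that the classification guidelines need revision before returns are classified in earnest.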
As in the May 2002 test, such a test using a small number of recently audited returns would give IRS more assurance that NRP classification will result in correct decisions. Another part of the July 2002 test will entail classifying a larger group of the previously audited EITC compliance study returns that were used in the May test. The primary purpose of this test will be as a pre-test of an analysis planned for later in the year when final NRP processes are applied to a sample of previously audited EITC returns. That analysis will provide a basis for a decision whether NRP will be a suitable substitute for separate EITC compliance research studies and, if so, what adjustments need to be made to NRP results to make them comparable to the EITC compliance studies. The July 2002 pre-test will also provide a check on classification accuracy for this particular type of return, but these returns represent only a small portion of the NRP sample. Until IRS completes its July classification test, it cannot update its classification guidelines and procedures or train the staff who are to classify the returns. Depending on the results of the test and the number and types of changes that may have to be made to the classification procedures, IRS may not be in a position to begin classifying returns in August, as planned. IRS plans to use its existing information systems to capture and store the data generated by NRP. Modifications to these systems—RGS and the Examination Operational Automated Database—are planned in order to support NRP. IRS plans to evaluate the suitability of these systems for these purposes, but has not yet completed these evaluations. Identifying information system issues early enough to deal with them before NRP is implemented will permit the rest of NRP’s processes to run more smoothly than if problems are encountered later.
IRS management told us that they have made implementing any information system changes needed to support NRP a top priority for the agency. IRS needs accurate and up-to-date information on taxpayers’ compliance with the tax laws in order to help it understand the effectiveness of its programs to promote and enforce compliance and target its enforcement audits on noncompliant returns. The design of NRP addresses this need while limiting the burden imposed on taxpayers selected for NRP reviews. IRS has been taking steps to implement NRP and, while it may not meet the October 1, 2002, date in its current schedule, it is currently on track to begin contacting taxpayers, when necessary, in late 2002. Because of the risk of increasing taxpayer burden, we agree with IRS that it is more important that design, testing, and implementation of NRP be complete before taxpayers are contacted than it is that the agency meet a self-imposed deadline. While IRS has shown flexibility about the date to begin taxpayer contacts, the interim milestone for beginning to classify returns remains fixed at August 2002. Adhering to this milestone may pose problems because NRP plans call for several sequential steps to be completed before classification begins. Although IRS’s plans call for completing these steps, there may not be sufficient time to complete them, in their proper sequence, before August 2002. Specifically, testing and modifying casebuilding and classification procedures must be finished before NRP examiners can be trained in the finalized procedures. Such tests will be most useful if they include the classification of some recently audited returns. In addition, IRS must select and then train the cadre of NRP examiners. While completing these steps before classification begins will not guarantee successful NRP implementation, it would provide added assurance that implementation will proceed smoothly.
We recommend that the Commissioner of Internal Revenue:

Ensure that testing and modification of NRP casebuilding and classification procedures are complete before IRS begins cadre training, classifying NRP returns, or making any taxpayer contacts. IRS should use some previously audited, non-EITC tax returns to evaluate NRP classification procedures and classifier training; and

Implement plans to select and appropriately train the cadre of examiners and other staff before NRP classification begins.

On June 18, 2002, we received written comments on a draft of this report from the Commissioner of Internal Revenue (see app. III). The commissioner agreed with our recommendations as they concern the completion of NRP process testing, cadre selection, and cadre training prior to the start of NRP casebuilding and classification. He expressed IRS’s commitment to not compromising the quality of the program in order to meet the agency’s scheduled date for commencement of taxpayer contacts under NRP. The commissioner also noted that draft reports from the two contractors evaluating the NRP sample indicate that the sample is valid for IRS’s goals of precision and workload selection development. The commissioner expressed concern with our recommendation that IRS use previously audited, non-EITC returns in the final stages of NRP casebuilding and classification testing. Specifically, he noted that IRS would not be able to produce all of the casebuilding data that relate to previously audited, non-EITC returns, and that IRS would not have a clear understanding of why the non-EITC returns were selected for audit and what guidance the examiners were given in conducting the audits. We agree that these are issues that a full-scale evaluation of NRP using previously audited returns would have to address.
In subsequent discussions with IRS’s Director of Research, Analysis and Statistics, the senior IRS official responsible for NRP, we pointed out that our recommendation was not for IRS to conduct a full-scale evaluation of NRP, but for a smaller review of some previously audited returns. As we note in the report, a small test, such as the one IRS conducted in May 2002 using 30 previously audited EITC returns, would give IRS more assurance that NRP classification will result in correct decisions. The Director of Research, Analysis and Statistics agreed that such a test using non-EITC returns would be feasible and useful and that IRS will conduct the test before beginning taxpayer contacts. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. The report is also available at no charge on the GAO web site at http://www.gao.gov. If you or any of your staff have any questions, please contact Ralph Block at (415) 904-2150 or me at (202) 512-9110. Key contributors to this assignment were Wendy Ahmed, Jeffrey Baldwin-Bott, and David Lewis.

Tax Compliance: Status of the Tax Year 1994 Compliance Measurement Program. GAO/GGD-95-39. (Washington, D.C.: December 30, 1994)
Tax Compliance: 1994 Taxpayer Compliance Measurement Program. GAO/T-GGD-95-207. (Washington, D.C.: July 18, 1995)
Letter to the Commissioner on TCMP Errors. GAO/GGD-95-199R. (Washington, D.C.: July 19, 1995)
Tax Administration: Information on IRS’ Taxpayer Compliance Measurement Program. GAO/GGD-96-21. (Washington, D.C.: October 6, 1995)
Tax Administration: Alternative Strategies to Obtain Compliance Data. GAO/GGD-96-89. (Washington, D.C.: April 26, 1996)
Tax Administration: Status of IRS Efforts to Develop Measures of Voluntary Compliance.
GAO-01-535. (Washington, D.C.: June 18, 2001)

Summary
Summarizes uses of Taxpayer Compliance Measurement Program (TCMP) data and outlines who uses the data. Identifies weaknesses of proposed changes and establishes criteria for evaluating proposed changes to measures of voluntary compliance.
Summarizes IRS’s plans for the 1994 TCMP and discusses promising changes. Identifies several weaknesses in the plan that the Internal Revenue Service (IRS) needs to fix before implementing the project.
Testimony on 1994 TCMP before the House Subcommittee on Oversight, Committee on Ways and Means. Discusses uses of TCMP data and status of planned 1994 TCMP effort. Discusses some of the criticisms of TCMP. Identifies GAO reports where TCMP data were used.
Summarizes errors in audits for 1988 TCMP and suggests changes to codes to be used to categorize the cause of noncompliance for the planned 1994 TCMP project.
Follow-up on issues raised in our December 1994 report concerning timeliness and the types of data IRS planned to gather for TCMP audits. Also, briefly discusses other sources of data on voluntary compliance and the relevance of TCMP data for alternative tax system proposals. Indicates how IRS responded to our recommendations.
Summarizes the problems caused by cancellation of the 1994 TCMP project. This report also identifies sampling strategies that will reduce the sample size and still provide some data.
Describes IRS’s efforts to develop new voluntary compliance measures. Also discusses how federal agencies besides IRS assess compliance with the rules and regulations governing their programs.

The Internal Revenue Service (IRS) designed a sample of 49,251 returns from the population of Form 1040 tax returns filed for tax year 2001. These include returns from wage earners and from self-employed individuals filing Schedule C and farmers filing Schedule F. IRS developed a representative sample, then added returns to increase the precision of NRP results.
Included with these additional returns are about 18,000 added to increase the likelihood that there would be enough returns to provide a basis for developing new audit selection formulas. The NRP sample designers used past reporting compliance research results to derive an estimate of the percentage of returns that will likely need to be audited in each of the strata. The designers then used those estimates to add returns to the sample, intending to have at least 500 sufficiently high tax change returns in each grouping of strata that will likely require the development of a unique audit selection formula. Officials explained that the 500-return standard was used in past reporting compliance studies to develop audit selection formulas. The designers also said that they considered it important to apply the same standard in NRP. The NRP sample is detailed in table 1.
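The over-sampling logic described above can be sketched as a short calculation. The strata names, base sample sizes, and estimated high-tax-change rates below are hypothetical placeholders, not IRS’s actual figures; only the 500-return standard comes from this report.

```python
import math

# Illustrative strata: hypothetical base sample sizes and estimated rates of
# "sufficiently high tax change" returns (NOT IRS's actual figures).
strata = {
    "schedule_c_low_income":  {"base_sample": 4000, "est_high_change_rate": 0.10},
    "schedule_c_high_income": {"base_sample": 2500, "est_high_change_rate": 0.25},
    "wage_high_income":       {"base_sample": 3000, "est_high_change_rate": 0.12},
}

TARGET = 500  # minimum high-tax-change returns per formula group (from the report)

def returns_to_add(base_sample, rate, target=TARGET):
    """Extra sampled returns needed so the expected number of
    high-tax-change returns reaches the target."""
    if base_sample * rate >= target:
        return 0
    # total returns needed at this rate, minus what the base sample provides
    return math.ceil(target / rate) - base_sample

for name, s in strata.items():
    extra = returns_to_add(s["base_sample"], s["est_high_change_rate"])
    print(f"{name}: add {extra} returns")
```

The same logic, applied across the actual NRP strata with rates derived from past compliance research, is what produced the roughly 18,000 added returns the report describes.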
The U.S. tax system is based on taxpayers voluntarily complying with the tax laws. However, the Internal Revenue Service (IRS) last measured taxpayers' rate of compliance using 1988 tax returns. As time has passed, IRS has become concerned that its ability to understand the effectiveness of its programs and target audits on noncompliant returns has deteriorated, potentially resulting in poorer service to taxpayers, reduced confidence in the fairness of the tax system, and unnecessary audits of compliant taxpayers. IRS is now planning a new compliance study called the National Research Program (NRP). NRP is designed to review 49,000 individual tax returns randomly selected from the population of over 129 million. According to the NRP plan, IRS will review each sampled return to determine whether the taxpayer has complied with statutory income, expense, and tax reporting requirements. Unlike past compliance studies, not all of the reviews will include contacting taxpayers. Based on GAO's assessment of the NRP in light of government guidance on performance measurement and data reliability, research design guidelines, and IRS's goals for the program, NRP's design is likely to yield the sort of detailed information that IRS needs to measure overall compliance, develop formulas to select likely noncompliant returns for audit, and identify compliance problems to address.
Food and beverages have been served onboard Amtrak trains since Amtrak was created. Amtrak’s eleven commissaries are located around the country and are responsible for receiving, warehousing and stocking food, beverages, and other items for Amtrak’s onboard dining and café service. Until January 1999, Amtrak ran these commissaries with its own employees. Since then, Amtrak has contracted out the responsibility for the commissaries and for ordering and stocking all food, beverages, and related items under a contract that expires in September 2006. Gate Gourmet (the contractor) is also a supplier of food and beverages to several major airlines. During fiscal years 2002 through 2004, the 3-year period we focused on in our audit work, Amtrak paid Gate Gourmet between $59 and $64 million a year in reimbursements and fees. Gate Gourmet personnel operate Amtrak-owned commissaries and order, receive, store, and stock trains with food, beverages, and other related items such as table linens and napkins. Food and beverage stock are charged to Amtrak employees who account for the food en route. When a train arrives at its final destination, all remaining stock items are returned to a commissary. Gate Gourmet charges Amtrak for the items used, as well as for labor, management, and other fees. The contract requires that Gate Gourmet provide Amtrak an independently audited annual report within 120 days following the expiration of each contract year. Amtrak’s model for handling its food and beverage service is similar to other passenger transportation companies, with some important differences. Northwest Airlines has outsourced its kitchen and commissary operations and has food and beverages delivered to each airplane before each flight. VIA Rail Canada, Canada’s national passenger railroad, serves food on most of its trains and owns and operates its own commissaries.
Food and other items are delivered to each train, consumed during the train’s run and restocked at the destination. The Alaska Railroad, however, has a private contractor that orders, stocks, delivers, prepares, and serves all of its food and beverages on its trains using its own labor force. With certain exceptions and limits, all food and beverage revenues and expenses are the responsibility of the contractor. Amtrak’s financial records show that for every dollar Amtrak earns in food and beverage revenue, it spends about $2—a pattern that has held consistent for all 3 years we reviewed. (See table 1 and fig. 2.) Amtrak’s financial records also indicate that Amtrak has lost a total of almost $245 million for fiscal year 2002 through fiscal year 2004 on food and beverage service. Section 24305(c)(4) of Title 49, United States Code, states that Amtrak is not to operate a food and beverage service whose revenues do not exceed the cost of providing such service. About half of the total food and beverage expenditure is labor cost for Amtrak staff who prepare and serve the food aboard the trains. About 38 percent is reimbursements and fees to Gate Gourmet, representing the cost of food and other products in addition to other fees paid to Gate Gourmet. About 9 percent is for other Amtrak costs. While Amtrak’s labor costs for its food and beverage service are significant, these costs are part of Amtrak’s overall labor cost structure, and as such, are beyond the scope of work we did for this testimony. However, a recent Amtrak Inspector General report suggested that Amtrak could save money on its food and beverage labor if the cost of this labor was similar to that of the restaurant industry. Amtrak has responded to these continued losses with some incremental reductions in food and beverage service.
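The figures above can be cross-checked with a little arithmetic: if expenses run about $2 for every $1 of revenue, the annual loss roughly equals annual revenue, and the implied Gate Gourmet share should land inside the reported $59 to $64 million a year. A rough consistency check, with all amounts approximate and in millions of dollars:

```python
# Rough consistency check of the figures reported above
# (all amounts approximate, in millions of dollars).
total_loss_fy02_04 = 245  # reported 3-year loss
years = 3

# With expenses of about $2 per $1 of revenue, annual loss ~ annual revenue.
annual_loss = total_loss_fy02_04 / years  # roughly 81.7
annual_revenue = annual_loss              # implied by the 2:1 cost ratio
annual_cost = 2 * annual_revenue          # roughly 163.3

# The report says about 38 percent of spending goes to Gate Gourmet.
gate_gourmet_share = 0.38 * annual_cost   # roughly 62

print(round(annual_cost, 1), round(gate_gourmet_share, 1))
```

The implied Gate Gourmet figure of roughly $62 million a year is consistent with the $59 to $64 million in reimbursements and fees reported earlier.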
On July 1, 2005, Amtrak plans to discontinue food and beverage service on its routes between New York City and Albany, New York, which would allow Amtrak to close its commissary in Albany. An official in Amtrak’s Office of Inspector General stated that Amtrak lost between $6 and $8 per person on food service on those routes and that closing the commissary will save Amtrak about $1 million per year. However, achieving additional savings by closing commissaries could be limited, as Amtrak’s other commissaries serve multiple Amtrak trains that would continue to offer food and beverage service. In other words, closing a commissary could affect multiple trains on multiple routes. According to an Amtrak procurement official, a team consisting of members of Amtrak’s procurement, legal, financial and transportation departments is currently working to identify ways to reduce Amtrak’s costs in its next commissary contract. Other transportation companies have taken actions to better control their food and beverage costs in recent years. For example, Northwest Airlines officials stated that they pay particular attention to food and beverage expenses. Since 2002, Northwest has reduced its food costs by 4 percent. This has been achieved by reducing or eliminating complimentary food service for coach passengers on domestic flights (even to the point of eliminating pretzels on these flights), aggressive pricing of food products and flexible budgeting that adjusts each month to reflect increases or decreases in ridership. VIA Rail officials told us they have considerable flexibility in hiring onboard service personnel, which allows VIA Rail to adjust its labor force to respond to peak and off-peak tourist seasons for its long-distance trains. In addition, VIA Rail officials said they have considerable flexibility in how onboard service staff are used; in essence, all onboard service staff can be used wherever and whenever needed.
The Alaska Railroad restructured the contract with its food and beverage service provider to allow for food price fluctuation within defined limits. One way to control costs is to build provisions into a contract that motivate a contractor to keep costs as low as possible. Amtrak’s current cost-reimbursable contract with Gate Gourmet creates, if anything, an incentive to increase Amtrak’s costs unless properly monitored. Under the contract, Gate Gourmet receives a number of reimbursements, including commissary, labor, and insurance costs, in addition to an operating fee. The operating fee is defined in the contract as 5 percent of the total actual cost of the onboard food and beverage items. This fee is an incentive for the contractor to increase Amtrak’s food and beverage costs. These costs can change in each yearly operating budget. This operating budget is subject to review by Amtrak and is mutually agreed to by both Amtrak and Gate Gourmet. Incentives can also be written into a cost-reimbursable contract to control costs and enhance performance. Although the contract included a discussion of performance standards, these standards and related measures were never created, even though they were required within 45 days after the contract was signed in January 1999. Performance standards would have allowed for performance incentives and penalties. If these incentives had been developed, then they could have been used to pay Gate Gourmet based on such things as finding lower-priced food products of similar quality to what is being purchased now, or identifying ways the food and beverage service could be operated more economically or efficiently. Other factors may not provide the needed incentives for Gate Gourmet to aggressively seek to reduce Amtrak’s food costs. Under current contract provisions, Gate Gourmet can charge Amtrak for food prepared in Gate Gourmet facilities and delivered to Amtrak’s commissaries.
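One standard way to build such incentives into a cost-reimbursable contract is a cost-plus-incentive-fee structure, in which the contractor’s fee rises when actual costs come in under a negotiated target and falls when they overrun it, rather than growing with costs as a flat 5 percent fee does. The sketch below is a generic illustration of that structure, not a provision of the Amtrak contract; all dollar amounts and the 20 percent share ratio are hypothetical.

```python
def cpif_fee(actual_cost, target_cost, target_fee,
             share_ratio=0.2, min_fee=0.0, max_fee=None):
    """Cost-plus-incentive-fee: the contractor's fee moves opposite to cost
    overruns and underruns, so both parties share savings and overruns.
    All parameters here are hypothetical, for illustration only."""
    fee = target_fee + share_ratio * (target_cost - actual_cost)
    if max_fee is not None:
        fee = min(fee, max_fee)
    return max(fee, min_fee)

# Hypothetical example: $20M target cost, $1M target fee, 20% share ratio.
print(cpif_fee(18_000_000, 20_000_000, 1_000_000))  # underrun raises the fee
print(cpif_fee(23_000_000, 20_000_000, 1_000_000))  # overrun lowers the fee
```

Under a structure like this, finding lower-priced products of similar quality would raise the contractor’s fee rather than lower it, aligning the contractor’s incentive with Amtrak’s.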
The contract provides considerable pricing flexibility to Gate Gourmet for these items with no detailed definitions or price caps. This makes it difficult to determine whether or not Amtrak is being charged a reasonable price. In addition, the contract also provides that Gate Gourmet deduct any trade or quantity discounts on items purchased for Amtrak either immediately from Amtrak’s invoices or retroactively based on the proportion of Amtrak’s purchases. Discounts applied retroactively are to be applied by Gate Gourmet in “good faith” and retroactive payments are “an approximation” that “cannot guarantee exactness.” The contract stipulates these payments are subject to an audit by Amtrak. However, these audits have never been conducted. In contrast, while Northwest Airlines has cost-plus contracts with its largest food and beverage contractors (including Gate Gourmet), Northwest’s management of them is different. Northwest’s caterer contracts have labor and other rates specified in the contract. According to Northwest’s food and beverage officials, if they change their menu, they know quickly how much their suppliers will charge them—even to the addition or subtraction of a leaf of lettuce served as part of an entree. In addition, Northwest officials stated that each price charged by its contractors is checked and invoices are audited. We identified five types of management controls that Amtrak did not fully exercise regarding oversight of its food and beverage service. These include the following: Requirement for an annual report has never been enforced. Amtrak’s contract requires Gate Gourmet to provide an independently audited annual report within 120 days following the expiration of each contract year; this report must also be certified by Gate Gourmet officials.
This report is to provide actual and budgeted amounts for key line items and a narrative explanation for any actual-to-budget variance greater than one percent in the aggregate for all commissaries. However, Gate Gourmet has not provided this report during the five completed years the contract has been in place. Amtrak food and beverage officials could not explain why they had decided not to enforce this provision. They told us that they relied instead on contractor-provided monthly operating statements and on reports from Amtrak’s Inspector General. Our review found that the monthly operating statements lacked critical information that was to be included in the annual report, were prepared by the party seeking reimbursement, and, perhaps more importantly, were not independently reviewed or audited. By contrast, the annual report was to be certified by contractor officials and audited by an independent certified public accountant. The Inspector General’s reports, while providing management with information on some aspects of Amtrak’s food and beverage service activities, should not be viewed as a substitute for a comprehensive audit and report. Audits of discounts and rebates were not conducted. The contract provides that Amtrak audit Gate Gourmet’s allocations of trade and quantity discounts received from purchases of food and beverages. However, Amtrak has never conducted an audit of the discounts credited to it, nor has it requested that the contractor certify that all of the discounts that Amtrak should receive have been credited to its account. Information we reviewed indicates that such audits may yield savings for Amtrak. For example, Amtrak officials advised us that discounts and rebates totaling over $550,000 for fiscal years 2002 and 2003 had been credited on gross purchases of about $6.5 million. 
However, total Gate Gourmet purchases exceeded $90 million for the 2-year period—roughly 13 times the amount of purchases the contractor reported as being subject to discounts and rebates. Because Amtrak did not require an independent audit or otherwise analyze the trade and quantity discounts received, Amtrak does not know whether it received all of the discounts and rebates to which it was entitled. Amtrak could not provide us with reasons supporting its decision or its consideration of this issue. Adequate monitoring of purchase price information needs improvement. Amtrak did not adequately monitor purchase price information for food and beverage items purchased by Gate Gourmet. Amtrak officials said they monitored contractor purchases using daily price reports that listed unit prices for purchases ordered the previous day and the price the last time the item was ordered. However, given the importance of purchase orders in a food and beverage operation, internal controls need to be developed to systematically monitor and analyze purchase information, and these controls should be monitored on a regular basis to assess the quality of performance over time. For example, controls should include processes to identify unit price variances over established or pre-set amounts and documentation of any follow-up work performed. Although Amtrak had some processes for comparing prices, they were not robust enough to include a record of price trends or of follow-up actions taken, such as corrections of amounts billed. Our testing of this control showed that if Amtrak had approached this review in a more rigorous manner, it may have identified discrepancies warranting further investigation. 
For example: Monitoring of Purchase Order Pricing: Using data mining and other audit techniques, we selectively reviewed more than $80 million of purchase order information for fiscal years 2002 and 2003 and found that the contractor was generating purchase orders with significant variances in unit prices. For example, in 2003, the purchase order price of a 10-ounce strip steak ranged from $3.02 to $7.58. Monitoring of Actual Product Price Charged by Gate Gourmet: When Amtrak officials told us that purchase order information did not always reflect actual amounts paid, we tested actual prices paid by Amtrak to Gate Gourmet. We nonstatistically selected 37 payment transactions, reviewed the underlying supporting documentation, and found evidence of widely variable product prices. For instance, in fiscal years 2002 and 2003, payments of over $400,000 for 12-ounce Heineken beer varied from $0.43 to $3.93 per bottle. Amtrak product pricing excludes labor costs. Our work revealed that Amtrak’s product price to the customer does not take into account over half of Amtrak’s total food and beverage costs. Amtrak’s target profit margin is 67 percent for prepared meals and 81 percent for controlled beverages. These target profit margins are expressed as a percentage of sales over the item product cost charged to Amtrak. However, they do not take into account Amtrak’s on-board labor costs, which our work estimates at over half of Amtrak’s total food and beverage expenditures. Amtrak’s current product pricing thus virtually ensures that its food and beverage service will not be profitable. Available procurement expertise not brought to bear. Finally, Amtrak’s procurement department was not involved in the negotiation of the original contract. The current contract was signed by officials of Amtrak’s now defunct Northeast Corridor Strategic Business Unit. 
The contract’s initial period was for about 7 years (January 29, 1999, to September 30, 2006), with a 5-year extension option. In addition, another agreement, under which Gate Gourmet’s flight kitchens supply food and beverage items for Amtrak’s Acela train service, was made verbally between Amtrak’s former president and the president of Gate Gourmet. Amtrak does not have any documentation of the contract terms for this service. In contrast to Amtrak, other transportation companies we interviewed closely monitor their invoices and contractor payments through periodic audits or have given the responsibility for costs and pricing to the contractor. For example, Northwest Airlines officials stated that they conduct regular audits of “every price” they are charged by their contractors and have found errors in either prices or labor charges in their contractor invoices. VIA Rail selectively audits the food supplier invoices that are attached to every billing statement it receives. Finally, the Alaska Railroad food and beverage business model gives responsibility for food and labor costs to the contractor, subject to contractual limits. Beyond contract management, information that would provide accountability over this service, both internally and externally, is limited. We noted that while Amtrak reports the combined revenue from its food and beverage services in its monthly performance reports, it does not identify for stakeholders the revenue attributable to each service. Amtrak also does not include any information about its food and beverage expenses in any of its internal or external reports, including its monthly performance reports, its internal quarterly progress reports, or its annual consolidated financial statements. Absent this information, it is difficult for internal and external stakeholders to determine the amount of expense attributable to the food and beverage service and to gauge the profit or loss of the operation. This hinders oversight and accountability. 
Other transportation companies we studied have a different accountability structure for their food and beverage service. Because VIA Rail receives a fixed subsidy from the Canadian federal government, its management has an inherent incentive to control costs in all areas of its operation, including its food and beverage service. VIA Rail controls its food and beverage costs in many ways, including fixed-fee supplier contracts, item price reports, monitoring of supplier markups and item prices, and fixed food cost budgets for VIA Rail menu planners. Northwest Airlines has a flexible monthly food and beverage budget that increases or decreases with ridership levels. In addition, each supplier contract has established markups on product prices, and its contracts with food preparation and delivery providers have detailed labor rates that are all audited for accuracy. The Alaska Railroad receives biweekly reports from its contractor detailing labor and food costs that show, among other things, contractor performance against the contractual cost caps. In addition, the Alaska Railroad and its contractor will conduct annual audits of the contractor’s performance under the contract. Amtrak’s food and beverage service may represent a relatively small part of the company’s operating budget, but it speaks volumes about Amtrak’s need to get its operations in better order. In administering this contract, basic steps for good management have been ignored or otherwise set aside. Omissions include not completing agreed-upon provisions of the contract, not carrying through with basic oversight called for in the contract, and not ensuring that the organization was getting products at the most reasonable price. Prudence requires a stronger effort, beginning with carrying out those steps that, under the contract, should have been taken all along. 
Amtrak needs to take such steps not only to curb the losses in this program, but to help convince the public that it is acting as a careful steward of the federal dollars that continue to keep it operating. Based on our work to date, we anticipate making recommendations to Amtrak to improve controls over its food and beverage operations. Because we did not have sufficient time to obtain Amtrak’s comments prior to this hearing, as required by government auditing standards, the recommendations remain tentative until that process is complete. At that time, we anticipate recommending that Amtrak:
1. Better contain its food and beverage costs by following its own procedures for ensuring proper contracts and by enforcing key provisions of the current Gate Gourmet contract, including annual reports that are independently audited by an outside auditing firm and certified by Gate Gourmet officials, and regular audits of discounts and rebates.
2. Prepare a written contract for food and beverage service on Acela trains that specifies the service to be provided, includes incentives to ensure efficient and effective contractor performance, and provides for regular annual reports and audits.
3. Create separate revenue and expenditure reporting and other basic food service metrics to allow for internal and external accountability for its food and beverage service and to create incentives to reduce costs and/or increase revenue.
4. Comprehensively review the revenue and cost structure of its food and beverage service to determine the most cost-effective solution for increasing the financial contribution of its food and beverage function.
Mr. Chairman, this concludes my testimony. I would be happy to answer whatever questions you or the other members might have. For further information, please contact JayEtta Z. Hecker at heckerj@gao.gov or at 202-512-2834. 
Individuals making key contributions to this statement include Greg Hanna, Heather Krause, Bert Japikse, Richard Jorgenson, Steven Martin, Robert Martin, Irvin McMasters, Robert Owens, and Randy Williamson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Amtrak has provided food and beverage service on its trains since it began operations in 1971. Amtrak has struggled since its inception to earn sufficient revenues and depends heavily on federal subsidies to remain solvent. While a small part of Amtrak's overall expenditures, Amtrak's food and beverage service illustrates broader concerns about Amtrak's cost containment, management, and accountability. This testimony is based on GAO's work on Amtrak's management and performance as well as additional information gained from Amtrak and other transportation providers. This testimony focuses on (1) the provisions written into Amtrak's contract with Gate Gourmet to control costs, (2) the types of management controls Amtrak exercises to prevent overpayments, and (3) the information Amtrak collects and uses to monitor the service and to report to stakeholders such as its Board of Directors. Amtrak's financial records show that for every dollar Amtrak earns in food and beverage revenue, it spends about $2--a pattern that has held consistent for all 3 years GAO reviewed. In GAO's estimation, Amtrak has lost a total of almost $245 million from fiscal year 2002 through fiscal year 2004 on food and beverage service. Since 1999, Amtrak has contracted with Gate Gourmet International (Gate Gourmet) to manage its commissaries and to order and stock all food, beverage, and related items, under a contract that expires in September 2006. Amtrak's current cost reimbursable contract with Gate Gourmet creates, if anything, an incentive to increase Amtrak's costs unless properly monitored. Gate Gourmet can charge Amtrak for the cost of the food and beverage items, as well as management, labor, and other expenses. Without defined controls and management, this type of contract structure provides little incentive for a contractor to reduce or contain costs to provide better value to its customer. 
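The cost-plus incentive described above can be sketched with rough arithmetic. The 5 percent operating fee rate comes from the contract as described in this testimony; the dollar amounts below are hypothetical, chosen only to show the direction of the incentive.

```python
# Sketch of the incentive in a cost-reimbursable contract with a fee set
# as a percentage of costs. The 5% rate is from the contract; the cost
# figures are hypothetical illustrations, not amounts from this testimony.
FEE_RATE = 0.05  # operating fee: 5% of total actual food and beverage cost

def operating_fee(reimbursed_cost: float) -> float:
    """Fee the contractor earns on top of its reimbursed costs."""
    return reimbursed_cost * FEE_RATE

baseline_fee = operating_fee(40_000_000)  # hypothetical $40M in reimbursed costs
inflated_fee = operating_fee(44_000_000)  # same service, 10% higher costs

# Every additional dollar of reimbursed cost raises the fee by 5 cents, so
# absent monitoring the contractor gains when Amtrak's costs rise.
assert inflated_fee > baseline_fee
```

A performance-based structure, for example a fee tied to documented cost reductions, would reverse this gradient; that is what the performance standards required by the contract, but never created, were intended to enable.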
GAO found five management controls that Amtrak did not fully exercise in overseeing its food and beverage service: (1) requiring an independently audited financial report, (2) auditing for all applicable rebates and discounts that Gate Gourmet could have applied to food and beverage items purchased for Amtrak, (3) adequately monitoring purchase price information for its food and beverage items, (4) considering Amtrak's food and beverage labor costs as a part of product markups, and (5) utilizing Amtrak's procurement department in negotiating the current contract. Information that could provide both internal and external accountability for the food and beverage function is limited. Amtrak does not include any information about its food and beverage expenses in any of its internal or external reports, including its monthly performance reports, its internal quarterly progress reports or its annual consolidated financial statements. This lack of information makes it difficult for internal and external stakeholders to gauge the profit or loss of the operation as well as to assign accountability.
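The pricing weakness noted above, markups set on product cost while on-board labor is excluded, can be illustrated with simple arithmetic. The 67 percent target margin for prepared meals and the finding that labor exceeds half of total food and beverage spending come from this testimony; the per-item figures below, including the assumed labor allocation, are hypothetical.

```python
# Illustrative arithmetic: a markup on product cost alone can look healthy
# while the service still loses money once on-board labor is counted. The
# 67% target margin is from the testimony; the per-item dollar figures and
# the labor allocation below are hypothetical assumptions.
product_cost = 1.00                          # product cost charged to Amtrak
target_margin = 0.67                         # margin as a share of the sale price
price = product_cost / (1 - target_margin)   # price needed to hit the margin

# Assumption: allocated on-board labor exceeds the sale price, consistent
# with labor being over half of total spending while revenue covers only
# about half of total expenses ($1 earned for every ~$2 spent).
labor_per_item = 1.2 * price

true_profit = price - product_cost - labor_per_item
assert true_profit < 0  # the "67% margin" item loses money overall
```

The point of the sketch is that a markup rule blind to the largest cost category cannot, by construction, steer the service toward profitability.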
In our June 2016 report, we found that although the Coast Guard had assessed its Arctic capabilities and worked with its Arctic partners—such as other federal agencies—to carry out actions to help mitigate Arctic capability gaps, it had not systematically assessed how its actions have helped to mitigate these gaps. Specifically, we reported that the Coast Guard had assessed its capability to conduct its Arctic missions and had identified various capability gaps, primarily through two key studies. The capability gaps identified in these reports—which Coast Guard officials confirmed remain relevant and which are highlighted in the agency’s Arctic strategy—include (1) communications, (2) Arctic maritime domain awareness, (3) infrastructure, (4) training and exercise opportunities, and (5) icebreaking. These gaps are similar to the ones we identified in 2010. According to Coast Guard officials, through the agency’s role in implementing the various Arctic strategies and implementation plans, the Coast Guard has taken actions, along with its Arctic partners, that have helped to mitigate capability gaps. For example, the Coast Guard is the lead agency for implementing the strategies’ tasks related to enhancing Arctic maritime domain awareness. In addition, Coast Guard officials reported that they use the annual Arctic Shield operations as the primary operational method to better understand the agency’s Arctic capabilities and associated gaps and to take actions to help mitigate them. For example, during Arctic Shield operations, the Coast Guard tested communications equipment belonging to the Department of Defense—extending communications capabilities further north than previously possible—and conducted Arctic oil spill response exercises. However, we found in our June 2016 report that the Coast Guard had not systematically tracked the extent to which its actions agency-wide have helped mitigate Arctic capability gaps. 
Coast Guard officials attributed this, in part, to not being able to unilaterally close the gaps. While fully mitigating these gaps requires joint efforts among Arctic partners, the Coast Guard has taken actions in the Arctic that are specific to its missions and has responsibility for assessing the extent to which these actions have helped to close capability gaps. Standards for Internal Control in the Federal Government provide that ongoing monitoring should occur in the course of normal operations and should help ensure that the findings of reviews, such as the capability gaps identified in the previously mentioned reports, are resolved. As a result, we recommended in our June 2016 report that the Coast Guard develop measures, as appropriate, and design and implement a process for systematically assessing the extent to which its actions have helped mitigate Arctic capability gaps. DHS concurred with our recommendations, and in response, the Coast Guard reported that it planned to develop specific measures for some of its Arctic activities and systematically assess how its actions have helped to mitigate the capability gaps for which the Coast Guard is the lead agency, such as icebreaking capacity. We believe that these actions, if implemented, will help the Coast Guard better understand the status of these capability gaps and better position it to effectively plan its Arctic operations. However, we continue to believe that it is important for the Coast Guard to also systematically assess how its actions affect Arctic capability gaps for which it is not the lead, such as communications. Although the Coast Guard may not be the lead for these gaps, assessing the impact of Coast Guard actions for such capability gaps would better enable the Coast Guard to understand the effectiveness of its actions and the status of all capability gaps. Also, as these gaps may affect its Arctic missions, this knowledge may be helpful to the Coast Guard in planning its operations. 
Our June 2016 report found that the Coast Guard has been unable to fulfill some of its polar icebreaking responsibilities with its aging polar icebreaking fleet and had efforts underway to acquire a heavy icebreaker—which has greater icebreaking capability than a medium icebreaker. Specifically, in 2011 and 2012, when its heavy icebreakers were not active, the Coast Guard was unable to maintain assured, year-round access to the Arctic and did not meet 4 of 11 requests for polar icebreaking services. The Coast Guard reported that increased heavy icebreaking capacity is needed to fully meet requirements in the Arctic and Antarctic regions. A 2010 Coast Guard-commissioned study found that at least six icebreakers—three heavy and three medium—would be required if the Coast Guard were to fully accomplish all of its statutory polar icebreaking responsibilities. Recognizing the fiscal challenges posed by such a request, Coast Guard officials have stated that obtaining a minimum of two heavy icebreakers is needed to at least maintain the fleet’s self-rescue capability in the event one vessel became beset in ice—a capability the Coast Guard does not currently have. We also found that the Coast Guard initiated a program in 2013 to acquire a new heavy icebreaker to maintain polar icebreaking capability after the Polar Star’s projected service life ends between 2020 and 2023. Currently, the Coast Guard is working to determine the optimal acquisition strategy. To move forward with the acquisition process, the Coast Guard would need to receive funding for an icebreaker—which, according to a 2013 preliminary estimate, would cost about $1.09 billion—and ensure that a U.S.-based commercial shipyard would be able to build the vessel. For many years, the Coast Guard’s annual acquisition budget has been allocated primarily to other projects. 
The President’s fiscal year 2017 budget request outlined plans to accelerate the acquisition process for a heavy icebreaker, so that production activities could commence by 2020. Various factors limit the options available to the Coast Guard to maintain, or increase, its icebreaker capacity. The Coast Guard has reported that the long-term lease of a polar icebreaker is unlikely to result in cost savings when compared with a purchase. Specifically, we reported in June 2016 that two key factors limiting the Coast Guard’s options for acquiring icebreaking capacity are the lack of an available icebreaker that meets agency and legal requirements, and the total cost that would be associated with a long-term lease. Availability. The Coast Guard reported that no existing heavy icebreakers were available to lease or purchase that met both its legal and operational requirements. To meet legal requirements, the Coast Guard would need to either purchase or demise charter the icebreaker, as legal requirements associated with several Coast Guard missions prohibit a short-term lease. Under federal law, to be capable of conducting all of its statutory missions, the Coast Guard must use a public vessel, which federal law defines as one that the United States owns or demise charters. For example, federal law states that the Coast Guard’s Ports, Waterways, and Coastal Security Mission may be carried out with public vessels or private vessels tendered gratuitously for that purpose. Further, federal law provides that no Coast Guard vessel may be constructed in a foreign shipyard. According to the Coast Guard, besides the Polar Star and the Polar Sea, the only existing icebreakers powerful enough to meet the Coast Guard’s operational requirements were built in and are owned by Russia and would not comply with this legal requirement. Budgeting and Total Cost. Budget requirements also affect the Coast Guard’s ability to acquire an icebreaker. 
For example, Office of Management and Budget (OMB) guidelines require federal agencies to acquire assets in the manner least costly overall to the government. Specifically, for a large acquisition like a heavy icebreaker, OMB Circular A-94 requires the Coast Guard to conduct a lease-purchase analysis based on total lifecycle costs of the asset. To proceed with a lease, the Coast Guard would need to show that leasing is preferable to direct government purchase and ownership. Budget scorekeepers—specifically, OMB, the Congressional Budget Office, and the House and Senate Budget Committees—score purchases and capital leases at the outset of the acquisition. A 2011 preliminary cost analysis prepared for the Coast Guard indicated that the lease option would be more costly to the federal government over an icebreaker’s expected 30-year service life. According to this analysis, the prospective ship owner’s profit rate would increase the overall expense as this profit rate is priced into the lease, such that government ownership would be less costly in the long run. Moreover, because a demise charter requires the lessee to operate and maintain the vessel, the Coast Guard would not be able to outsource crewing or maintenance activities to reduce its operating costs. Previous GAO work on the question of leasing versus buying an icebreaker identified important assumptions in comparing the costs to the federal government and suggested that outright purchase could be a less costly alternative than a long-term vessel lease. Assuming that the cost of building and operating the vessel was the same under both the buy and the lease scenarios, the cost advantage to government purchase over leasing in our previous work was based on two factors. 
First, the costs of private sector financing under a lease arrangement—which were higher than the government’s borrowing costs—could be expected to be passed on to the federal government in lease payments, thereby increasing the vessel’s financing costs over what they would be under outright government purchase. Second, under a lease arrangement, an additional profit would accrue to the lessor for services related to its retained ownership of the vessel. Anticipating a likely gap of 3 to 6 years in heavy icebreaker capability between the expected end of the Polar Star’s service life between 2020 and 2023 and the deployment of a new icebreaker in 2026, we reported in June 2016 that the Coast Guard is developing a bridging strategy, as required by law, to determine how to address this expected gap (see fig. 2). We reported in June 2016 that the Coast Guard has not determined the cost-effectiveness of reactivating the Polar Sea, and that a Bridging Strategy Alternatives Analysis will assess and make recommendations on whether to reactivate the Polar Sea and whether to further extend the service life of the Polar Star. Coast Guard officials said that they have not established a completion date for this report, but do not anticipate a final decision on the Polar Sea before fiscal year 2017, after which they will evaluate the cost-effectiveness of extending the Polar Star’s life, if necessary. In conclusion, the Coast Guard has made progress in assessing its capabilities in the Arctic and taking steps to address identified capability gaps, but the Coast Guard could do more to systematically determine the progress it has made in helping to mitigate these various gaps. Further, several factors exist that affect the Coast Guard’s options for acquiring a new icebreaker, including both legal and budgetary considerations that suggest a purchase of an icebreaker may be preferable to a long-term lease. 
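The two cost factors from our previous work, private financing rates passed through in lease payments and the lessor's retained profit, can be sketched with a simplified amortization comparison. Only the rough $1 billion build cost and 30-year service life echo figures cited in this statement; every rate below is a hypothetical assumption, not a figure from the Coast Guard's 2011 analysis.

```python
# Simplified lease-versus-buy sketch over a 30-year service life. The
# ~$1 billion cost and 30-year life echo figures in this statement; the
# interest rates and profit loading are hypothetical assumptions.
def annuity_payment(principal: float, rate: float, years: int) -> float:
    """Level annual payment that amortizes principal at the given rate."""
    return principal * rate / (1 - (1 + rate) ** -years)

acquisition_cost = 1.0e9  # rough build cost
service_life = 30         # years

# Purchase: the government finances at its (lower) borrowing rate.
gov_rate = 0.03           # assumed government borrowing rate
buy_annual = annuity_payment(acquisition_cost, gov_rate, service_life)

# Lease: the owner amortizes at a higher private rate and adds profit,
# both of which are passed through in the lease payment.
private_rate = 0.06       # assumed lessor financing rate
lessor_margin = 0.10      # assumed profit loading on the lease payment
lease_annual = (1 + lessor_margin) * annuity_payment(
    acquisition_cost, private_rate, service_life)

# With equal build and operating costs, the lease costs more each year.
assert lease_annual > buy_annual
```

Under these assumptions the annual lease payment runs well above the purchase-financing payment, which is the direction of the result in both the 2011 preliminary analysis and our previous work.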
Regardless of the acquisition approach, there is a strong likelihood of a 3- to 6-year gap in heavy icebreaking service, which underscores the need for the Coast Guard to move forward with its bridging strategy. Chairman Hunter, Ranking Member Garamendi, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Jennifer Grover at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Dawn Hoff (Assistant Director), Tracey Cross (Analyst-in-Charge), Chuck Bausell, Linda Collins, John Crawford, Michele Fejfar, Laurier Fish, Eric Hauswirth, Carol Henn, Susan Hsu, Tracey King, Jan Montgomery, Jillian Schofield, Katherine Trimble, and Eric Warren. Requires the Coast Guard to, in part, establish, develop, maintain, and operate icebreaking facilities on, under, and over the high seas and waters subject to the jurisdiction of the United States; and, pursuant to international agreements, requires the Coast Guard to develop, establish, maintain, and operate icebreaking facilities on, under, and over waters other than the high seas and waters subject to the jurisdiction of the United States. Requires the President to facilitate planning for the design, procurement, maintenance, deployment, and operation of icebreakers as needed to support the statutory missions of the Coast Guard in the polar regions by allocating all funds to support icebreaking operations in such regions, except for recurring incremental costs associated with specific projects, to the Coast Guard. Authorizes the Coast Guard to maintain icebreaking facilities. 
Requires the Coast Guard to conduct such oceanographic research, use such equipment or instruments, and collect and analyze such oceanographic data, in cooperation with other agencies of the government, or not, as may be in the national interest. Authorizes the Coast Guard to provide and accept personnel and facilities, from other federal and state agencies, to perform any activity for which such personnel and facilities are especially qualified and as may be helpful in the performance of its duties, respectively. Congress finds that the United States has important security, economic, and environmental interests in developing and maintaining a fleet of icebreaking vessels capable of operating effectively in the heavy ice regions of Antarctica. The Department of Homeland Security is required to facilitate planning for the design, procurement, maintenance, deployment, and operation of icebreakers needed to provide a platform for Antarctic research. Congress finds that the United States has important security, economic, and environmental interests in developing and maintaining a fleet of icebreaking vessels capable of operating effectively in the heavy ice regions of the Arctic. Strategic policies: Implementation Framework for the National Strategy for the Arctic Region (2016). The Coast Guard is the lead agency for ensuring the United States maintains icebreaking capability with sufficient capacity to project an assured Arctic maritime access, supports U.S. interests in the polar regions, and facilitates research that advances the fundamental understanding of the Arctic. 
National Security Presidential Directive 66/Homeland Security Presidential Directive 25 (NSPD-66/HSPD-25): Arctic Region Policy (2009). The Department of Homeland Security and other departments shall “preserve the global mobility of United States military and civilian vessels and aircraft throughout the Arctic region” and “project a sovereign United States maritime presence in the Arctic in support of essential United States interests.” Presidential Memorandum 6646: United States Antarctic Policy and Programs (1982). The Departments of Defense and Transportation (now Department of Homeland Security) shall provide logistical support as requested by the National Science Foundation to support the United States Antarctic Program. Interagency agreements: Memorandum of Agreement between Department of the Navy and Department of the Treasury on the Operation of Icebreakers (1965). The Navy agreed to transfer all icebreakers to the Coast Guard, and the Coast Guard agreed, among other things, to maintain and operate the U.S. icebreaker fleet, to prepare for contingency or wartime operations in polar regions, to assign icebreakers to the Navy’s operational control for seasonal polar deployments, and to support scientific programs to the extent possible. Memorandum of Agreement between Coast Guard and National Science Foundation (2010). The Coast Guard agreed to provide polar icebreaker support to conduct the resupply of McMurdo Station to support the U.S. Antarctic program and to conduct research in the Antarctic. Memorandum of Agreement between the Department of Defense and Department of Homeland Security on the Use of U.S. Coast Guard Capabilities and Resources in Support of the National Military Strategy (2008/2010). In ice-covered and ice-diminished waters, Coast Guard icebreakers are the only means of providing assured surface access in support of Department of Defense missions. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The retreat of polar sea ice in the Arctic, as reported by the U.S. National Snow and Ice Data Center, combined with an expected increase in human activity there, has heightened U.S. and other nations' interests in the Arctic region in recent years. Growth in Arctic activity is expected to increase demand for services such as search and rescue and maritime navigation support, which can be a challenge to provide given the harsh and unpredictable weather and vast distances that responding agencies must travel to reach the Arctic. The Coast Guard plays a significant role in U.S. Arctic policy and issued its Arctic strategy in May 2013. This statement addresses the extent to which the Coast Guard has (1) assessed its Arctic capabilities and taken actions to mitigate any identified gaps, and (2) reported being able to carry out polar icebreaking operations. This testimony is based on a June 2016 GAO report. GAO reviewed relevant laws and policies and Coast Guard documents that detail Arctic plans, among other things. Detailed information on GAO's scope and methodology can be found in the June 2016 report. GAO reported in June 2016 that the U.S. Coast Guard, within the Department of Homeland Security (DHS), had assessed its Arctic capabilities and worked with its Arctic partners—such as other federal agencies—to mitigate Arctic capability gaps, including communications and training. Although Coast Guard officials stated that the agency's actions, such as testing communication equipment in the Arctic and conducting Arctic oil spill response exercises, have helped to mitigate Arctic capability gaps, the Coast Guard has not systematically assessed the impact of its actions on these gaps. GAO recommended in June 2016 that the Coast Guard develop measures, as appropriate, and design and implement a process for systematically assessing the extent to which its actions have helped mitigate Arctic capability gaps.
DHS concurred with GAO's recommendations, and the Coast Guard reported that it planned to develop specific measures for some of its Arctic activities and systematically assess how its actions have helped to mitigate the capability gaps for which the Coast Guard is the lead agency. While officials stated they are unable to unilaterally close capability gaps for which the Coast Guard is not the lead agency, assessing the impact of Coast Guard actions for such capability gaps would better enable the Coast Guard to understand the effectiveness of its actions and the status of all capability gaps, as well as plan its Arctic operations. GAO's June 2016 report also found that the Coast Guard has been unable to fulfill its polar icebreaking responsibilities with its aging icebreaker fleet, which currently includes two active icebreakers. In 2011 and 2012, the Coast Guard was unable to maintain year-round access to the Arctic and did not meet 4 of 11 requests for polar icebreaking services. With its one active heavy icebreaker—which has greater icebreaking capability—nearing the end of its service life, the Coast Guard initiated a program in 2013 to acquire a new one and is working to determine the optimal acquisition strategy. However, the Coast Guard's efforts to acquire an icebreaker, whether by lease or purchase, will be limited by legal and operational requirements. In addition, current projections show that the Coast Guard is likely to have a 3- to 6-year gap in its heavy icebreaking capability before a new icebreaker becomes operational, as shown below. The Coast Guard is developing a strategy to determine how to address this expected gap. Coast Guard's Heavy Icebreaker Availability and Expected Capability Gap, Present until 2030
The Federal Reserve System is composed of an independent government agency—the Board of Governors (Board)—and 12 regional Reserve Banks, each of which is located in a Federal Reserve district. (See fig. 1.) The Board is responsible for maintaining the stability of financial markets, supervising financial institutions such as bank holding companies and the U.S. operations of foreign banking organizations, and supervising the operations of the Reserve Banks. Unlike the Board, the Reserve Banks are not federal agencies. Each Reserve Bank is a federally-chartered corporation with a board of directors. Unlike federal agencies funded through congressional appropriations, the Board and Reserve Banks are self-funded entities that engage in a variety of activities that generate revenue, such as earnings from lending to financial institutions. The Federal Reserve deducts its costs from these revenues and transfers the remaining amount to the General Fund of the U.S. Treasury (General Fund). In 2012, the Federal Reserve transferred $88.4 billion to the General Fund. Federal Reserve revenues contribute to total U.S. government revenues, and therefore, if its costs can be reduced—such as through more efficient coin-inventory management— more of its revenue could potentially be contributed to the General Fund. The Reserve Banks carry out a variety of functions for the Federal Reserve, including ensuring that coins and notes are available in quantities sufficient to meet the public’s needs. The 12 Reserve Banks provide coins and notes to depository institutions, among other responsibilities. The Federal Reserve’s Cash Product Office (CPO) manages the Reserve Banks’ coin inventory from a national perspective, working closely with the Reserve Banks. For example, CPO places monthly orders for new coins with the U.S. Mint on behalf of the Reserve Banks. Other entities, including the U.S. 
Mint, coin terminal operators—armored carrier companies, such as Brink's and Dunbar, that hold both Reserve Bank and other customers' coins in their facilities—and depository institutions play a role in issuing or managing the circulation and distribution of coins. (See fig. 2.) For coins, the Treasury's U.S. Mint is the issuing authority. The U.S. Mint is financed through a revolving fund and generates revenue through various means, including the sale of circulating coins at face value to the Reserve Banks. Revenue in excess of costs—including all costs allocable to the U.S. Mint's circulating coin program—is transferred to the General Fund. U.S. Mint facilities in Philadelphia and Denver produce and ship new coins for circulation to Reserve Bank offices and coin terminals. Approximately 170 coin terminals are operated by 15 armored carrier companies. Coin terminal operators receive deposits from and fulfill orders of coins for depository institutions on behalf of the Reserve Banks and other customers. As we have previously reported, coin terminals operate at no cost to the government: they maintain Reserve Bank coin inventories at no charge and instead earn revenue from the coin transportation and wrapping services they provide to other customers, such as depository institutions. Depository institutions order coins from the Reserve Banks—through an online ordering system called FedLine operated by the Reserve Banks—to meet retailers' and the public's demand; depository institutions' coin orders are fulfilled with new and circulated coins held at Reserve Bank offices or coin terminals. Depository institutions can deposit coins with Reserve Banks when they have more coins than needed to fulfill demand. Depository institutions contract with armored carriers to wrap and deliver the coins to them, ultimately providing these coins to retailers and the general public.
The circulating coin inventory consists of coins held by Reserve Banks—both in Reserve Bank offices and coin terminals—and those coins in general circulation for public use. In 2012, Reserve Banks held about 5 percent ($2.1 billion) of the circulating coin inventory, and 95 percent ($42 billion) of the inventory was in general circulation. As of December 2012, the 28 Reserve Bank offices held about 50 percent of the Reserve Banks' total coin inventory of pennies, nickels, dimes, and quarters and about 92 percent of the Reserve Banks' total coin inventory of $1 coins. The 170 coin terminals held the remainder of the Reserve Banks' coin inventory. Based on the Board's statutory authorities, the Board is responsible for note issuance, distribution, and processing. For example, the Board is the issuing authority for notes and is also responsible for distributing and authenticating notes. The Treasury's BEP produces notes to meet the Board's annual note order. According to the Treasury, at the end of 2012, approximately $1.1 trillion in notes were in circulation and approximately $228 billion in notes were held by the Reserve Banks. In 2012, BEP production accounted for about 32 percent of the circulating note inventory. This is due, in part, to the amount of new notes that need to be replaced each year because they are worn or no longer fit for circulation. The Reserve Banks manage the note inventory through 28 note-processing centers and 10 note distribution locations. CPO also has a role in providing note services, including processing. Similar to the distribution process for coins, depository institutions order notes from the Reserve Banks through FedLine, and then place these notes in circulation to meet the demand of retailers and the public. The Federal Reserve also contracts with armored carriers to transport notes for circulation or storage.
When notes are returned by depository institutions as deposits to the Reserve Banks, each note is processed to determine its quality and authenticity; coins do not undergo similar processing. During processing, worn and counterfeit notes are removed from circulation, and the rest are wrapped for storage or re-circulation. The Reserve Banks are responsible for ensuring the efficient distribution and circulation of coins, including the $1 coin, which co-circulates with the $1 note. Legislation has been introduced in Congress to eliminate the $1 note and replace it with the $1 coin. In 2012, we reported that the federal government would receive $4.4 billion in net benefits over 30 years if Congress decided to replace the $1 note with the $1 coin. This most recent savings estimate is lower than the results of our previous similar analyses, in part, because the life of the $1 note has increased. The Presidential $1 Coin Act of 2005 requires the Federal Reserve and the Secretary of the Treasury to assess and submit an annual report to the Congress on the obstacles to the efficient and timely circulation of $1 coins, among other things. As we have previously found, while Congress sought to increase the circulation of the $1 coin in recent years, circulation has remained limited, in part, because the $1 note has remained in circulation. Since 2009, the Federal Reserve has made changes to its coin inventory management that include centralizing the coin management system and establishing a contract with coin terminal operators. Introduced in 2009, the National Coin Inventory Management program centralized the management of the circulating coin inventory under the CPO so that the coin inventory would be consistently managed across the 12 Reserve Bank districts. Previously, each Reserve Bank office set and managed its own inventory levels, resulting in varying levels of inventory held relative to demand.
Under the centralized approach, CPO manages the distribution of coin inventory, orders new coins, and acts on behalf of the Reserve Banks in working with stakeholders. The centralized approach to coin inventory management was set up to increase the efficiency of the coin distribution process. Since 2009, Reserve Bank inventories for pennies, nickels, dimes, and quarters have decreased due in part to the centralized program. In particular, from 2008 through 2012, the combined inventory for pennies, nickels, dimes, and quarters decreased 43 percent. (See fig. 3.) CPO officials have attributed these decreases in inventory and coin orders to the 2009 introduction of CPO management of the coin inventory, but other factors may have also contributed to the decrease, such as the 2007–2009 financial crisis and recession, which may have affected the public's demand for coins. As part of the centralized approach to inventory management, in 2009, CPO established national upper and lower inventory targets for pennies, nickels, dimes, and quarters to track and measure the coin inventory. CPO officials noted that these targets help meet their primary goal in managing the nation's coin inventory: ensuring a sufficient supply of all coin denominations to meet the public's demand. The upper national inventory target serves as a signal for CPO to reduce future coin orders from the U.S. Mint to avoid the risk of approaching coin-storage capacity limits, and the lower national inventory target serves as a signal to CPO that there is a need to increase future coin orders to avoid shortages. According to CPO officials, they view falling below the lower targets as more problematic than exceeding the higher targets because the lower targets are designed to guard against the more serious risk of not having enough coin to meet demand. We analyzed national inventory targets from 2009 to 2012 and found that in most cases the national inventory targets were met.
However, inventory exceeded the upper inventory targets in 2009 for nickels and quarters; in 2010 for nickels, dimes, and quarters; and in 2011 for quarters. In addition, the national inventory for pennies was 7 percent below the lower targets in 2012. See appendix II for additional information on coin inventories, orders, and circulation. In addition to considering nationwide coin inventory, CPO reviews the daily inventories of the 200 Reserve Bank offices and coin terminals and has established upper and lower inventory levels to ensure there is a sufficient but not an excessive supply of coins at each location. Given that the coin supply at each location differs depending on that location's typical volume of coin payments and receipts, each Reserve Bank office and coin terminal is required to hold a minimum of 2 weeks of "payable days" and a maximum of 3 weeks of payable days in inventory. Prior to 2009, there were no required inventory levels for distribution locations, and coin shortages and excesses occurred in specific locations. Since CPO has centralized the management of the Reserve Banks' coin inventory, coin terminal operators we spoke with said coin shortages are less common and that they are better able to manage their inventory and provide depository institutions with the denominations they need to fulfill public demand. In managing the coin inventory, CPO determines if coins should be transferred from an area with more coins than needed to fulfill current and future demand or if additional coins should be ordered from the U.S. Mint. (See fig. 4.) To make this determination, CPO uses a proprietary inventory management system that collects data on inventory, receipts, and payments from the approximately 200 coin distribution locations and forecasts expected changes in coin demand for each location.
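The location-level bounds described above lend themselves to a simple threshold check: express a location's inventory as days of typical outgoing payments and compare it to the 2-week minimum and 3-week maximum. The sketch below is illustrative only; the function names and figures are hypothetical, and CPO's actual inventory management system is proprietary.

```python
# Illustrative sketch of the location-level "payable days" check.
# All names and numbers are hypothetical.

def payable_days(inventory_bags: float, avg_daily_payments_bags: float) -> float:
    """Inventory expressed as days of typical outgoing coin payments."""
    return inventory_bags / avg_daily_payments_bags

def inventory_signal(inventory_bags: float, avg_daily_payments_bags: float,
                     min_days: float = 14, max_days: float = 21) -> str:
    """Flag a location against the 2-week minimum / 3-week maximum bounds."""
    days = payable_days(inventory_bags, avg_daily_payments_bags)
    if days < min_days:
        return "below minimum: transfer in or order new coin"
    if days > max_days:
        return "above maximum: transfer out"
    return "within bounds"

# Example: a coin terminal paying out 40 bags/day while holding 500 bags
# is at 12.5 payable days, below the 2-week floor.
print(inventory_signal(500, 40))  # below minimum: transfer in or order new coin
```

In practice the decision between transferring coins and ordering new ones would also weigh forecasted demand and transfer costs, as the report describes next.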
Before transferring coins from one region to another, CPO considers whether a region's future demand is expected to change and considers typical seasonal shifts in coin demand and local market factors such as coin-recycling operations that could affect the flow of coins. These transfers, known as interbank transfers, according to CPO officials, are the only direct coin transportation cost to Reserve Banks because armored carriers provide local coin transportation and delivery of new coins is provided by the U.S. Mint. Coins are also transferred between Reserve Bank offices and coin terminals within a region when distribution locations need additional coin supply or have excess coin inventory. For example, coins could be moved via interbank transfer from Reserve Bank offices in Seattle or Minneapolis to support distribution locations in Helena, Montana, and coins could be transferred among three distribution locations in Helena, a Reserve Bank of Minneapolis office, and two coin terminals. If there is an insufficient supply of coins to meet demand and transferring coins would not be cost effective, CPO orders new coins from the U.S. Mint. CPO orders new coins each month from the U.S. Mint based on its 2-month rolling forecast of expected demand, as shown above in figure 4. CPO provides a monthly order to the U.S. Mint about 2 months prior to expected delivery (e.g., CPO submits the August order in June), and to help the U.S. Mint prepare for potential future orders, CPO provides estimates of projected demand and new coin orders for up to the next 12 months. After submitting orders to the U.S. Mint, CPO may increase an order or defer shipments to later months based on updated information. In part to respond to these changes, each month the U.S. Mint produces a safety stock of coins.
If this stock is not applied to the current order, it is used to fill future orders. After the new coins are produced, based on expected demand, CPO determines which Reserve Bank will receive new coins, and then Reserve Banks determine which offices and coin terminals will receive them. From 2008 to 2009, new coin orders for pennies, nickels, dimes, and quarters decreased by 79 percent; however, since 2010 new coin orders have increased annually. (See fig. 5.) According to CPO officials, the 2009 introduction of centralized coin management led to the reduction in coin inventories, and the more recent increases in coin orders reflect a return to a more normal ordering pattern that is closely aligned with payments to circulation and receipts from circulation. However, other factors may have also contributed to decreased new coin orders in 2009, such as the 2007-2009 financial crisis and recession, which may have affected the public's demand for coins. In 2011, on behalf of Reserve Banks, CPO developed and negotiated a contract with coin terminal operators, which standardized procedures and internal controls for the storage and handling of coins across all Reserve Bank districts. Prior to this contract, each Reserve Bank negotiated its own contract with coin terminal operators that operated in its district. We spoke with 5 of the 15 coin terminal operators, and they reported satisfaction and efficiency gains with the standardized contract and CPO's centralized management. For example, one coin terminal operator told us that the Federal Reserve's centralized approach allows coin terminal operators to manage their business proactively rather than reactively. Among other things, the contract requires coin terminals to use FedLine to order new coins and track changes to Reserve Bank coin inventory held by the coin terminal.
Coin terminal operators have told us that FedLine works more effectively than the earlier ad-hoc communications to order coin and track inventory that preceded it. In 2012, Reserve Bank costs related to coin management were about $62 million or 14 percent of the estimated $449 million that coins indirectly cost the U.S. government. These costs included CPO’s administration, coin handling, and interbank coin transfer costs. The remaining 86 percent of U.S. government costs include about $387 million for the U.S. Mint’s production and distribution of new coins. In addition, the government earns a return on the issuance of coins to the extent that production and distribution costs are less than the face value of the coins put into circulation—this value to the government is known as seigniorage. The U.S. Mint reported that in 2012 the government received about $106 million in seigniorage because the face value of coins produced was $493 million, and the U.S. Mint’s cost of producing and distributing them was $387 million. The Federal Reserve’s 2012–15 strategic plan includes an objective to use financial resources efficiently and effectively. In addition, according to COSO, as part of the internal control process management should ensure that operations, such as managing an inventory, are efficient and cost effective, and this process includes monitoring costs and using this information to make operational adjustments. To monitor costs related to coin and note operations, CPO officials said they review currency management costs—which include costs related to both coins and notes—at the national level because individual Reserve Banks may vary in their accounting for operational costs related to coins and notes. When we reviewed currency management costs at the national level using data provided by CPO, we found that from 2008 through 2012 total annual Reserve Bank currency management costs increased by 23 percent. (See fig. 6.) 
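The seigniorage figure cited above follows directly from the U.S. Mint's reported 2012 figures; as a quick arithmetic check (figures in millions of dollars, as stated in the report):

```python
# Seigniorage = face value of coins issued minus the cost of producing
# and distributing them. Figures (in $ millions) are from the U.S. Mint's
# reported 2012 results as cited in this report.
face_value_millions = 493
production_and_distribution_cost_millions = 387

seigniorage_millions = face_value_millions - production_and_distribution_cost_millions
print(seigniorage_millions)  # 106, i.e., about $106 million returned to the government
```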
Cost information for coins and notes is available separately; however, CPO does not separately monitor coin costs. Our analysis of coin management costs, using CPO data, indicates that coin management costs increased by 69 percent from 2008 through 2012. CPO officials attributed the increase in coin management costs to support costs, which increased by 80 percent during that time period, from $24.5 million in 2008 to $44.1 million in 2012. (See fig. 18 in app. II.) Support costs include utilities, facilities, and information technology as well as other local and national support services such as CPO's services. According to CPO officials, support costs are influenced by a variety of factors, including the number and size of operating units at each location. Also, with the consolidation of some operating units, such as check processing, other operations, including coin management, have absorbed a higher percentage of support costs, according to these same officials. They further explained that direct costs—which include personnel and equipment—represent their primary measure of Reserve Bank coin operation costs. We found that direct costs for coin operations increased by 45 percent during this period, an increase of about $5 million across the 28 Reserve Bank offices. According to CPO officials, the increase in direct costs can be largely attributed to an increase in personnel costs, which may be influenced by the volume of coin bags handled onsite and the number of coin terminals serviced. By not separately monitoring coin costs, the Federal Reserve may be missing opportunities to assess and improve the cost-effectiveness of its coin operations. In addition, we also reviewed coin management costs at each Reserve Bank and found that the rates of increasing coin-management costs differ across Reserve Banks.
Using data provided by CPO on individual Reserve Banks' costs, we found that from 2008 through 2012, coin management costs increased for all Reserve Banks, with the increases ranging from a low of 36 percent to a high of 116 percent. To account for variations in the volume of coins handled by individual Reserve Banks, we also reviewed the average cost per bag handled by Reserve Banks and found that in 2012 it ranged from about $2 to $57 per coin bag. CPO officials attributed variations in Reserve Bank coin management costs to different operational practices, such as outsourcing coin handling to coin terminals, and differences in direct and support costs. While outsourcing coin handling may decrease some costs, as personnel are not required at the Reserve Bank location to perform these services, it may also increase other costs related to daily management and periodic auditing of these outsourced services. Without taking steps to identify and share cost-effective coin management practices across Reserve Banks, the Federal Reserve may be missing opportunities to support more efficient and effective use of Reserve Bank resources. Moreover, more efficient management of the coin inventory may contribute to cost savings and additional funds returned to the General Fund. In managing the circulating coin inventory, we found that the Federal Reserve follows key practices for collaboration and risk management and partially follows key practices for performance metrics, forecasting demand, and system optimization. To effectively manage inventory, private and governmental organizations involved in production and distribution operations use supply-chain and operations-management practices.
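The per-bag comparison described above normalizes each Reserve Bank's coin costs by its handling volume. A minimal sketch of that normalization, using hypothetical banks and figures chosen only to span the roughly $2 to $57 range GAO observed in 2012:

```python
# Illustrative per-bag cost comparison across Reserve Banks.
# Bank names and cost/volume figures below are hypothetical; GAO reported
# actual 2012 values ranging from about $2 to $57 per coin bag.

def cost_per_bag(annual_coin_cost: float, bags_handled: int) -> float:
    """Average coin management cost per bag handled in a year."""
    return annual_coin_cost / bags_handled

banks = {
    "Bank A": (1_200_000, 480_000),  # high-volume onsite operation
    "Bank B": (2_850_000, 50_000),   # low volume; outsources most handling
}
for name, (cost, bags) in banks.items():
    print(f"{name}: ${cost_per_bag(cost, bags):.2f} per bag")
```

Normalizing by volume in this way is what makes costs comparable across banks with very different operational practices, such as outsourcing coin handling to coin terminals.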
We identified five key supply-chain practices—collaboration, risk management, performance metrics, forecasting demand, and system optimization—and selected supporting characteristics applicable to coin inventory management to assess the Federal Reserve’s management of the circulating coin inventory. (See table 1.) These key supply-chain management practices are closely related, and improvements or shortfalls in one practice may contribute to improvements or shortfalls in another practice. In addition, establishing, documenting, and following these practices and their supporting characteristics contribute to a more effective inventory management system. The Federal Reserve follows the key practice of collaboration through its relationships with supply chain stakeholders, including Reserve Banks, coin terminal operators, depository institutions, and the U.S. Mint. CPO has developed policies and guidelines governing relationships with each of its partner entities. For example, as previously discussed, in 2011 CPO introduced a contract with coin terminal operators, which includes a manual of operations that standardized procedures and internal controls for handling Reserve Bank coins. CPO has also agreed upon roles and responsibilities with partner entities. For example, U.S. Mint officials told us that as part of the coin supply chain, they deal almost exclusively with CPO to manage day-to-day operations and interbank coin transfers. The U.S. Mint arranges coin transportation across Reserve Banks, and CPO reimburses the U.S. Mint for the cost of this transportation. CPO has multiple mechanisms, such as stakeholder working groups, for sharing information related to the circulating coin inventory with partner entities. Reserve Banks, coin terminal operators, depository institutions, and the U.S. Mint reported to us that they are generally satisfied with their relationships with CPO. 
The Federal Reserve follows the key practice of risk management because it has identified sources of potential disruptions, assessed the potential impact of risk, and developed plans to mitigate risk at multiple levels of operations, including Reserve Bank, FedLine, and coin inventory management operations. For example, the lower national-inventory targets are set based on the highest consecutive 10-day gross pay period from 2009 to 2012 because, according to CPO officials, the U.S. Mint can produce and deliver new coins within 10 days. Therefore, the lower inventory target guards against a coin shortage if a disruption prevented circulating coin payments from depository institutions. In addition, risk management for the circulating coin inventory is built into the Federal Reserve's overall risk management and contingency protocols. For example, each Reserve Bank has a designated "buddy bank" to perform its functions—including coin circulation activities—if it is unable to operate due to a disruption. According to CPO officials, as soon as a disruption is anticipated, they begin planning with stakeholders operating in the affected area, including Reserve Bank officials, coin terminal operators, and depository institutions. Continuity plans are also in place for Federal Reserve services such as FedLine. The FedLine website includes information on the types of potential disruptions, such as system outages or closures of Reserve Bank or coin terminal operations due to inclement weather or other events, and what depository institutions and armored carriers should do in the event of a disruption. The Federal Reserve partially follows the key practice of performance metrics because its use of performance metrics is limited to inventory targets and it has not developed other goals or metrics related to coin supply-chain management.
CPO has established, tracks, and annually reevaluates performance metrics in the form of upper and lower inventory targets for pennies, nickels, dimes, and quarters. In addition, CPO closely monitors "net pay" to measure how well it is meeting general demand nationwide and at individual locations. However, CPO has not established additional management goals or metrics to measure other aspects of its management, such as costs, because, as discussed earlier, CPO's primary goal is to ensure that a sufficient supply of all coin denominations is available to meet the public's demand. Federal agencies have been required to develop performance goals and measure and report on their progress in achieving these goals since the Government Performance and Results Act (GPRA) was enacted in 1993. Although the Federal Reserve is not covered by GPRA, the Board has chosen to voluntarily comply with the spirit of the act. In addition, our previous work on managing results has shown that agencies should identify goals and establish a suite of performance metrics to determine whether they are meeting those goals. We have identified customer satisfaction, efficiency, and costs, among others, as performance metrics that can be used to measure an agency's progress towards its goals. Moreover, costs are among the common performance measures used in supply chain management. The Australian, Austrian, and Canadian mint and central bank officials we interviewed have established multiple performance goals and metrics for their coin inventory management. For example, the Royal Australian Mint and its commercial bank partners have established targets to reduce coin holdings that will result in reducing the cost of freight and other related coin expenses. Further, the Royal Australian Mint tracks and monitors coin management costs to ensure that they are progressing towards their targets.
We found that the Federal Reserve partially follows the key practice of forecasting demand because it forecasts future coin demand and uses this information to make decisions, but does not systematically track the accuracy of its monthly forecasts compared to the final coin orders. As discussed previously, CPO has a process in place to forecast demand by tracking current inventory, payments, and receipts and using this information to calculate expected future demand. These forecasts are then used to plan and manage the circulating coin inventory, including decisions on coin transfers and new orders. CPO officials told us they review their annual forecasts and have found that their forecasts for payments and receipts are within 10 percent of actual orders for pennies, nickels, dimes, and quarters. However, CPO has taken minimal steps to assess monthly forecast accuracy. CPO officials told us that they compare their initial coin orders to net pay but do not track the accuracy of their monthly forecasts because seasonal shifts in coin demand make reviewing annual trends more useful for their purposes. Our analysis of initial monthly CPO coin orders and actual U.S. Mint coin shipments from 2009 through 2012 indicates that initial orders were consistently less than the final orders (i.e., U.S. Mint shipments of new coin). APICS—an operations management industry association that offers professional certifications—recommends that forecasting results be continuously monitored, that a mechanism be in place to revise forecasting models as needed, and that a forecast consistently exhibiting a bias be adjusted to match actual demand. According to the Logistics Management Institute, accurate forecasts result in effective and efficient inventories, whereas inaccurate forecasts often cause inventory excesses and shortfalls.
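The bias check that APICS recommends can be sketched in a few lines: compare initial orders to final orders, and if the error is consistently one-sided, adjust the next forecast by the average shortfall. The order figures below are hypothetical; GAO's actual finding was that initial orders were consistently below final orders from 2009 through 2012.

```python
# Sketch of a forecast-bias check along the lines APICS recommends.
# If initial monthly orders are consistently below final orders, the mean
# error is positive and the forecast should be adjusted upward.
# The order figures below are hypothetical.

initial_orders = [100, 120, 110, 130, 125, 115]  # initial monthly coin orders
final_orders   = [112, 131, 118, 141, 133, 127]  # final orders (Mint shipments)

errors = [final - initial for initial, final in zip(initial_orders, final_orders)]
mean_error = sum(errors) / len(errors)

# A consistently positive error indicates the forecast is biased low.
biased_low = all(e > 0 for e in errors)
next_forecast = 120 + mean_error  # adjust the next forecast by the average shortfall

print(biased_low, round(mean_error, 1))  # prints: True 10.3
```

In this hypothetical series every initial order falls short, so the check flags a downward bias and raises the next forecast by the mean shortfall; tracking such errors systematically is what GAO suggests CPO does not currently do for its monthly forecasts.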
In addition, inventory management experts told us that accurate forecasts would make the Federal Reserve better able to respond to changing trends in coin demand, as discussed later in this report. Taking additional steps to assess forecast accuracy could help CPO identify the factors influencing forecast accuracy and then adjust forecasts to improve accuracy. Although CPO has multiple systems that provide information from across the supply chain, we found that it partially follows the practice of system optimization because it does not currently use this information to better understand some additional aspects of its coin management activities and to optimize the efficiency of the circulating coin inventory. As discussed earlier, CPO has access to information and resources from across the circulating coin supply chain. This access includes information on actual and forecasted coin demand and Reserve Bank coin management costs, as well as information from stakeholders, such as new coin delivery information from the U.S. Mint. In addition, the Federal Reserve has taken some steps to identify areas where it could gain incremental efficiency improvements, such as the centralized management of the coin inventory and note-processing efficiencies, which we discuss later. However, CPO could improve its use of information and resources to identify and implement efficiencies within the supply chain by using the range of information available to establish and track performance metrics to measure progress. For example, the U.S. Mint's monthly production of new coins could be more efficient with improvements to the accuracy of initial new coin orders. Currently, the U.S. Mint produces a safety stock, in part, to ensure it is able to produce enough coins to fulfill CPO's final, adjusted order. More accurate monthly forecasts and coin orders could lessen the need for the U.S. Mint to produce safety stock, and thus help to optimize the efficiency of the supply chain.
In addition, CPO could use its information on Reserve Bank coin management costs and its knowledge of coin management operations across the 12 Reserve Banks to identify factors that have contributed to varying coin-management costs at individual Reserve Banks, as well as opportunities for cost savings that could limit rising costs. Better information on forecast accuracy and costs could also aid CPO in identifying inefficiencies and supporting system optimization. Further, optimizing U.S. Mint and individual Reserve Bank operations could help reduce U.S. Mint or Federal Reserve costs related to circulating coins, a reduction that could in turn increase the amount of money returned to the General Fund. To collect data and information on potential changes in the demand for currency, the Federal Reserve has conducted studies and outreach with groups such as depository institutions and merchants, and found a general consensus that the use of currency may decline slightly in the near term. According to the Federal Reserve, this expectation is due, in part, to an increase in alternative payment options. Federal Reserve officials we met with described how interrelated factors make it difficult to predict long-term (i.e., 5 to 10 years) currency demand. The factors the Federal Reserve identified as influencing currency demand and the mechanisms used to supply that demand include the relative costs and benefits of currency versus other forms of payment, the level of economic activity and other economic conditions, technological change, and regulations and policies. Federal Reserve officials explained that how these factors will play out in the years ahead is unknown, and therefore the magnitude of the change in demand for currency is uncertain. 
According to many agency officials, stakeholders, and foreign government officials we spoke to, while there may be changes in the use of various types of payments in the coming years, the effect on currency demand is likely to be a gradual decline. Federal Reserve officials noted that, thus far, their research indicates that the amount of currency in circulation continues to rise and currency usage is currently steady. Federal Reserve studies and data also indicate electronic payment options have increased over time. For example, in a 2010 study, the Federal Reserve reported that the number of non-currency payments—including credit card and debit card payments, among others—increased 4.6 percent per year from 2006 to 2009. Since 2006, the debit card has surpassed the check as the most used non-currency payment method, with the number of debit card transactions increasing 14.8 percent per year from 2006 to 2009. Over the same period, check volume decreased 7.1 percent per year. Many stakeholders noted that even with changes in non-currency payments, the effect on currency demand is likely to be gradual. For example, BEP officials stated that the increasing use of electronic payments has not had a major impact on demand for currency. Further, these officials said demand for currency may even increase and that growing use of electronic and mobile payments will likely affect checks more than currency. Royal Australian Mint officials also stated that while the use of electronic payment methods is expected to increase as new technologies emerge, how fast and when such a change will occur is unclear. While data on credit card usage and electronic transactions are available, the volume of currency transactions is difficult to measure. Specifically, other data sources cannot be used to determine what portion of the currency in circulation is being used for transactions—for example, some currency in circulation is stored rather than deposited or used for commerce. 
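The per-year rates reported in the 2010 study compound over the 2006 to 2009 period; a short sketch of the implied cumulative changes, using only the rates stated above:

```python
# Convert each reported annual growth rate into the implied cumulative
# change over the three years from 2006 to 2009. Rates are from the
# Federal Reserve's 2010 payments study as cited in the text.
annual_rates = {
    "non-currency payments": 0.046,
    "debit card transactions": 0.148,
    "check volume": -0.071,
}
years = 3  # 2006 to 2009

for name, rate in annual_rates.items():
    cumulative = (1 + rate) ** years - 1
    print(f"{name}: {cumulative:+.1%} over {years} years")
```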
As the use of non-currency payment methods has increased, the quantity of notes in the economy has also grown. According to Federal Reserve data, the number of transactional notes (i.e., $1s, $5s, $10s, and $20s) in circulation has increased steadily from approximately 19.6 billion pieces in 2009 to 21.8 billion pieces in 2012. During this period, the value of currency in circulation rose approximately 26 percent. (See fig. 7.) Coin terminal operators noted that there will be a continued need and demand for coins and notes, particularly because there will always be some portion of the population who will only use currency, such as individuals without bank accounts, or “the unbanked.” Federal Reserve officials expect that their current procedures and approach to managing the coin and note inventory—including their forecasting and monitoring of the coin inventory targets discussed previously—will allow the agency to accommodate gradual shifts in demand. For example, to respond to increasing or decreasing demand for coins and notes, CPO can decrease or increase coin orders from the U.S. Mint and the Board can decrease or increase note orders from BEP. Specifically, in 2012 the Federal Reserve assessed how notes received from circulation are processed and determined that current operations can handle a significant change in volume—either an increase or a decrease—without significant change to the operating model and staffing levels. Nonetheless, according to the officials, CPO is continually working to identify ways to streamline its processes to be more flexible and adaptable to changes. In addition, CPO and the Reserve Banks have established risk management plans and procedures to address the effects associated with a short-term, unexpected change in coin and note demand—such as from a natural disaster. Other experts and foreign officials agree that well-managed currency systems are capable of handling major trend-based changes. 
According to inventory management experts we consulted, a key to effectively managing a supply chain is dependable forecasting. Effective forecasts would take both trends and cyclical demand changes into account. Therefore, combining forecasts with continual tracking of demand and inventory levels should allow the Federal Reserve to adapt to any major trend-based changes in coin and note demand. As discussed earlier, this makes accurate forecasting by the Federal Reserve even more important. Royal Australian Mint officials also stated that routinely monitoring activities such as coin trades and transfers between and among institutions would give officials insight into specific ways in which demand was changing, ultimately allowing the volume of coins in circulation to be adjusted in response to public demand. While Federal Reserve officials we met with indicated their current processes should enable them to adapt to gradual changes in coin and note demand, a significant and unexpected change could affect the management of the coin and note inventories. According to Federal Reserve officials and some coin terminal operators we met with, the replacement of the $1 note with the $1 coin would be the type of event that could significantly affect demand for a specific coin denomination, and it would have implications for the overall management of both the coin and note inventory. For example, Federal Reserve officials stated that replacing the $1 note with the $1 coin would likely require increases in coin vault space and manpower as well as improvements in coin authentication technologies. CPO officials also said that if a large decline in coin usage occurs, they would adapt their management of the inventory in response. For example, if demand for coins were to decrease suddenly, leaving too many coins in circulation, the Federal Reserve would first stop ordering new coins from the U.S. 
Mint, and would then focus on storing the excess coin inventory. Coin attrition would reduce this inventory over time, and CPO officials anticipate that they would have sufficient storage capacity available to accommodate the excess coins. CPO officials told us that inventory levels would need to be well in excess of the existing targets before they would have an effect on storage capacity and related costs. Further, if public demand for coins decreased substantially, additional storage could be needed to accommodate the coins returned by depository institutions to the Reserve Banks. Coin terminal operators also did not expect a decrease in coin demand significant enough to exceed their storage capacity. In 2010, CPO began to develop a long-term strategic framework to consider potential changes to currency demand over the next 5 to 10 years and how this change could affect CPO’s operations. According to Federal Reserve officials, while the future is inherently uncertain, this framework is an internally focused effort to help them share information, refine internal operations, and monitor trends. The Federal Reserve has not established deadlines for completing this effort, but CPO officials said they continue to share what they are learning from data and other research, and they continue to assess current coin distribution and note-processing operations. The following activities are components of these efforts: Engaging system stakeholders. As an initial step, CPO officials interviewed stakeholders such as depository institutions, armored carriers, equipment vendors, merchants, and alternative payment providers to collect information and perspectives on potential changes in currency demand. Groups such as the Customer Advisory Council and the Cash Advisory Group also continue to serve as a mechanism for the CPO to coordinate and collect data and information on currency activity nationwide. 
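The attrition-based drawdown of excess coin inventory described above can be sketched as follows; the attrition rate, excess amount, and drawdown target are all assumptions for illustration, since the report does not state any of them:

```python
# Hedged sketch of how attrition draws down an excess coin inventory over
# time after new-coin orders stop. All numbers are assumed, not reported.
excess = 500.0          # excess coin inventory, millions of dollars (assumed)
attrition_rate = 0.02   # fraction of coin lost or retired per year (assumed)
target = 100.0          # level at which the excess is considered absorbed (assumed)

years = 0
while excess > target:
    excess *= (1 - attrition_rate)  # attrition shrinks the excess each year
    years += 1

print(f"years to draw down below target: {years}")
```

Even with generous assumptions, such a drawdown spans decades, which is consistent with the officials' emphasis on having sufficient storage capacity in the interim.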
Examining internal operations for coin distribution and note processing. CPO officials have sought to increase the efficiency of coin-distribution and note-processing operations to better position the agency to adapt to future changes in demand. For example, the Federal Reserve enhanced its note sensor technology in 2010, improving the efficiency of note-processing operations by nearly 10 percent, according to CPO officials. CPO officials told us that they continue to look for other areas where they can make incremental improvements and add flexibility to their operations. Incremental improvements such as those related to note-processing activities could contribute to supply-chain system optimization, as discussed earlier. Conducting research. The Federal Reserve continues to conduct research and analysis related to Reserve Bank operations. For example, one Reserve Bank is conducting the most recent in a series of triennial payment studies, which it expects to complete later this year, to determine the current volume and composition of electronic and check payments, reporting trends in these payments since 2009. As part of a broader effort to look at trends in various payment types, another Reserve Bank is examining the detailed spending habits of a selection of consumers, who were asked to document their transactions and payment decisions over a period of time in a shopping “diary.” Because, as previously noted, determining how much of the currency in circulation is being used for transactions is difficult, this type of study can help officials better understand currency use in the United States. The diary study is expected to be completed and published later this year, and CPO officials told us they plan to assess and incorporate relevant findings into their currency management operations. 
Australian, Austrian, and Canadian officials we interviewed are also exploring the potential impact of alternative payment technologies and continue to analyze past trends and collect new data to inform these research efforts. For example, Austrian and Canadian officials have conducted diary studies to better understand individuals’ use of various payment options. Collecting detailed consumer payment information through these types of studies can help officials in these countries better understand consumers’ payment and currency management habits. The Federal Reserve manages the $1 coin inventory as it does all other coin denominations, overseeing the distribution of coins produced by the U.S. Mint and those already in circulation. The Federal Reserve’s goal, based on its statutory responsibilities, is to ensure that sufficient supplies of coins are available to meet demand nationwide. According to Federal Reserve officials, from an operational perspective, the Federal Reserve fills all orders it receives and treats all denominations of coins and notes the same. Reserve Banks held approximately $1.4 billion in $1 coin inventory as of March 2013. According to the Federal Reserve’s 2013 Annual Report to Congress, this inventory level is sufficient to meet the demand for $1 coins for more than 40 years, assuming a continuation of the current level of demand. Federal Reserve officials told us net demand for $1 coins averaged approximately $51 million per year from 2010 through 2012. Inventory increased steadily with the issuance of the Presidential $1 Coin series beginning in 2007, but has leveled off since the 2011 Treasury decision to cease production of new $1 coins for circulation (see fig. 8). In addition, $1 coin payments to depository institutions declined from $804 million in 2007 to $245 million in 2012, while receipts from depository institutions generally increased until 2012. 
In 2012, because public demand for $1 coins was lower than the supply in circulation, Reserve Banks received more $1 coins back than they paid out, as indicated by net pay in figure 8. According to Reserve Bank officials, depository institution representatives, coin terminal operators, and vending machine industry representatives we met with, $1 coins are readily available to the public throughout the country. Most of the officials and representatives told us that they do not have problems obtaining $1 coins or supplying them to their customers, but emphasized that there is low public demand for these coins. For example, Reserve Bank officials in the Cash Advisory Group said that $1 coins are available at their locations nationwide and that they are able to fill customers’ orders as needed. Depository institution representatives on the Customer Advisory Council told us their depository institutions routinely fill $1 coin orders for organizations with standing orders—such as transit agencies or vending machine companies—but that requests from the general public are rare. A Federal Reserve coin terminal we visited also had an inventory of $1 coins, and officials there told us that while they generally receive $1 coin deposits, withdrawals are rare. In contrast, representatives from the Dollar Coin Alliance said that there is limited commercial and public access to the $1 coin and that some alliance members have had difficulty obtaining the coins from depository institutions. According to these representatives, the Federal Reserve’s treatment of $1 coins—in particular, a limited ordering period for new $1 coins featuring a specific president—has hampered successful circulation. However, other depository institution and industry representatives we met with did not identify similar access or availability issues. 
Beginning in 2007, in response to a requirement in the Presidential $1 Coin Act of 2005, the Federal Reserve took steps early in the Presidential $1 Coin Program to identify and overcome barriers to circulation of the $1 coin. The Presidential $1 Coin Act requires Treasury and the Federal Reserve to identify, analyze, and overcome barriers to the “robust circulation” of $1 coins, including improved methods of distribution and circulation, and improved public education and awareness campaigns. Beginning in 2007, the Federal Reserve met regularly with depository institution representatives to gather feedback about demand for $1 coins and identify potential barriers to circulation. In its 2007 Annual Report to Congress on the Presidential $1 Coin Program—a statutorily required annual report—the Federal Reserve outlined actions it had taken to eliminate identified barriers, including developing $1 coin distribution plans, establishing a special ordering period for new coins along with special packaging and order sizes, and communicating with industry and other federal agencies. In addition, the Federal Reserve and U.S. Mint conducted national and local outreach with coin user groups to gather input and help plan for the introduction of the new coins. According to Federal Reserve officials, most efforts began in 2007 and continued until the 2011 Treasury decision to cease production of the $1 coin for circulation. While the Federal Reserve took steps to overcome barriers to the circulation of the $1 coin to meet existing demand, according to Federal Reserve officials, it can do little else given that the $1 coin is no longer produced for circulation and the agency’s statutory responsibilities focus on ensuring $1 coins are available to meet demand, not on taking steps to change demand. The Federal Reserve’s 2007 Annual Report to Congress reported that the U.S. 
Mint and stakeholder feedback identified the co-circulation of the $1 coin with the $1 note as the most significant barrier to improved circulation of the $1 coin. In addition, many depository institutions, coin terminal operators, experts, and foreign officials we met with identified the $1 note as a barrier to the increased circulation of the $1 coin and mentioned that eliminating the $1 note would increase demand for the $1 coin. However, neither the 2007 Annual Report nor subsequent annual reports identified the $1 note as a barrier to the $1 coin. Attorneys in the Federal Reserve’s Legal Division do not consider the $1 note to be a barrier because they do not view co-circulation as limiting the circulation of the $1 coin. Rather, Federal Reserve officials noted the $1 note is an alternative to the $1 coin that the public freely chooses—that is, their view is that the $1 coin is fully available to the public and its circulation is thus at the level that the public demands. According to Federal Reserve officials, the Federal Reserve’s authority does not extend to promotion, and therefore the agency is not likely to take unilateral action to promote wider circulation of the $1 coin. If this is the case, congressional action would likely be the only feasible means of replacing the $1 note with the $1 coin. Consistent with the actions of other countries, we have previously recommended that the Congress replace the $1 note with a $1 coin due to the financial benefit the government would receive from the replacement. As we found in our prior work, other countries that have replaced a low-denomination note with a coin, such as Canada and the United Kingdom, stopped producing the note. Officials in these countries noted this step was essential to the success of their transition to the coin and that, with no alternative to the note, public resistance dissipated within a few years. 
Australian and Canadian Mint officials also explained that they took steps in advance of the issuance of their $1 coins to facilitate public adoption. For example, Royal Canadian Mint officials said that public outreach, stakeholder collaboration, and removing the $1 note from circulation were key elements of a successful transition to the $1 coin in Canada. In its 2011 Annual Report to Congress, the Federal Reserve stated that stakeholders and depository institution representatives reported through their routine meetings that the $1 coins continue to be easy to order and that its communications about the program have been effective, but that transactional demand for $1 coins has not increased since the start of the program and overall demand continues to come primarily from collectors. Federal Reserve officials told us they continue to discuss the $1 coin as necessary with groups such as the Customer Advisory Council, but that they do not intend to take any additional actions. The Board believes the suspension of the minting of the Presidential $1 coin for circulation makes the annual report no longer necessary and, in its 2012 Annual Report to Congress, proposed the elimination of the annual reporting requirement. Since 2009, on behalf of the Reserve Banks, the Federal Reserve has taken steps to standardize its management of the circulating coin inventory from a national perspective. Generally, these efforts have contributed to improvements, such as reductions in national coin inventories and orders, and stakeholder satisfaction with the Federal Reserve’s new approach. The Federal Reserve’s current strategic plan calls for the efficient and effective use of financial resources that could lead to more efficient operations and potentially cost savings. However, some issues remain. 
Reserve Bank coin management costs have risen since 2008, and CPO has not taken steps to systematically assess the factors influencing direct and support costs related to coin management or to determine whether elements of its coin inventory management could yield cost savings across the Reserve Banks. Interrelated key practices related to the Federal Reserve’s management of the circulating coin inventory indicate opportunities to advance the use of performance information to establish and monitor additional performance goals and metrics and to improve processes for forecasting demand. For example, establishing goals and metrics and tracking progress toward them will allow the Federal Reserve to better ensure that multiple aspects of its coin management activities are being monitored and will help agency officials identify and document other program impacts or determine where additional efficiencies could be gained. Taking steps to assess monthly forecasts to improve accuracy can also lead to overall improvements in the coin supply chain. Developing and tracking performance information and assessing forecasts—key practices used by private and governmental organizations to effectively manage their inventories—could ultimately help identify cost savings for Reserve Banks or improve the efficiency of U.S. Mint coin production, which may in turn result in more money returned to the General Fund, contributing to U.S. taxpayer savings. 
To ensure efficient management of the circulating coin inventory, we recommend that the Board of Governors direct CPO to take the following three actions: develop a process to assess the factors that have influenced increasing coin operations costs and differences in costs across Reserve Banks and a process to use this information to identify practices that could lead to cost savings; establish, document, and annually report to the Board performance goals and metrics for managing the circulating coin inventory (e.g., Reserve Bank coin management costs) and measure performance toward those goals and metrics; and establish and implement a process to assess the accuracy of forecasts for new coin orders and revise the forecasts as needed. We provided a draft of this report to the Chairman of the Board of Governors of the Federal Reserve System and the Secretary of the Treasury for review and comment. In written comments, reproduced in appendix IV, the Federal Reserve generally agreed with the report’s recommendations. Treasury had no comments. We are sending copies of this report to the appropriate congressional committees and the Federal Reserve, U.S. Mint, and BEP. In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report addresses the following questions: (1) How does the Federal Reserve manage the circulating coin inventory and what are the coin management costs? (2) To what extent does the Federal Reserve follow key supply-chain management practices in managing the circulating coin inventory? (3) What actions has the Federal Reserve taken to respond to potential changes in demand for coins and notes? 
(4) What actions has the Federal Reserve taken with regard to the circulation of the $1 coin, and what more, if anything, could it do? To address these questions we met with federal agency officials, foreign officials, industry and academic experts, and currency industry representatives. (See table 2.) We selected industry and academic experts with supply-chain or coin-inventory management expertise. We selected Australia, Austria, and Canada as countries with experiences relevant to our review—such as replacing low-denomination notes with coins or recent implementation of coin-inventory management process improvements. These countries were selected for illustrative purposes and are not intended to be used as benchmarks for direct comparisons to the Federal Reserve’s management of the circulating coin inventory. We obtained documents from and conducted interviews with Federal Reserve officials to obtain information about the agency’s processes for managing the distribution of the circulating coin inventory. We also visited a Federal Reserve coin terminal in White Marsh, Maryland. In addition, we reviewed literature and our prior reports related to coin inventory management. To review the Federal Reserve’s management of the circulating coin inventory and identify costs associated with managing this inventory, we interviewed Federal Reserve and United States Mint (U.S. Mint) officials, as well as industry representatives—such as National Armored Car Association members and depository institution representatives on the Cash Product Office’s (CPO) Customer Advisory Council (a group established to provide input on coin and note operations). To assess management operations related to the circulating coin inventory, we used the Federal Reserve’s Strategic Framework 2012-2015 and the Committee of Sponsoring Organizations of the Treadway Commission’s (COSO) Internal Control—Integrated Framework. 
We also obtained and analyzed coin inventory and production data for 2004 through 2012 from the Federal Reserve, U.S. Mint production data from 2010 through March 2013, and Reserve Bank coin and note management cost data from 2008 through 2012. For example, we reviewed Federal Reserve data on monthly coin forecasts and annual coin inventory levels, analyzed data on monthly coin orders and shipments for bias, and compared overall and individual Reserve Bank costs related to coin and note operations. To assess the extent to which the Federal Reserve follows key supply-chain and inventory-management practices, we developed and validated criteria with stakeholder and expert input. To develop and define practices common to efficient supply chain and inventory management and applicable to the circulating coin inventory in the United States, we reviewed supply chain management and operations management literature on leading practices, our past defense inventory management and Government Performance and Results Act reports, and academic literature. We also worked with industry experts to identify a selection of supply-chain management practices relevant to coin inventory management. We identified five key practices: collaboration, risk management, performance metrics, forecasting demand, and system optimization. For additional information on these key practices, see appendix III. The supporting characteristics associated with the key practices were selected based on knowledge of circulating coin management. Selected academic and industry experts in operations and supply chain management and the circulating coin inventory, as well as foreign mint or central bank officials from Australia, Austria, and Canada, validated that the selected key practices, definitions, and supporting characteristics were relevant to coin inventory management. 
We compared the Federal Reserve’s inventory management practices to the five key practices by assessing the extent to which the Federal Reserve met individual supporting characteristics for each practice. The individual assessments for the supporting characteristics served as the basis for the overall assessment for each key practice. Specifically, after we assessed the selected supporting characteristics, we made an overall assessment for each of the five practices using the following scale: Followed or substantially followed—plans, policies, or processes have been developed and implemented properly for all or nearly all of the supporting characteristics. Partially followed—plans, policies, or processes have been developed and implemented properly for some of the supporting characteristics. Minimally or not followed—plans, policies, or processes are lacking for all or nearly all of the supporting characteristics. To determine the extent to which the Federal Reserve followed these key practices and supporting characteristics, we reviewed agency documents and interviewed officials from the Federal Reserve and U.S. Mint. We also met with industry stakeholders (e.g., depository institution representatives and coin terminal operators) and academic and industry experts to discuss their views on the Federal Reserve’s management of the circulating coin inventory and working relationships with other entities in the circulating coin supply chain. To identify actions the Federal Reserve has taken to respond to potential changes in demand for coins and notes, we interviewed officials from the Federal Reserve, U.S. Mint, and Bureau of Engraving and Printing (BEP). We also interviewed depository institution representatives on the Customer Advisory Council, academic experts, and industry representatives, including coin terminal operators and Coinstar. 
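A minimal sketch of the three-level rating scale described above; the numeric cutoffs are assumptions for illustration, since the report states the scale only qualitatively ("all or nearly all," "some," "lacking"):

```python
# Illustrative scoring of a key practice from its supporting characteristics.
# The 80% and 20% thresholds are assumed stand-ins for "nearly all" and
# "lacking"; the report does not define numeric cutoffs.
def rate_practice(characteristics_met: int, characteristics_total: int) -> str:
    share = characteristics_met / characteristics_total
    if share >= 0.8:
        return "followed or substantially followed"
    if share >= 0.2:
        return "partially followed"
    return "minimally or not followed"

print(rate_practice(5, 5))  # all supporting characteristics met
print(rate_practice(2, 5))  # some met
print(rate_practice(0, 5))  # none met
```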
We also interviewed government officials in Australia, Austria, and Canada to obtain their perspectives on potential changes in future cash demand. In addition, we reviewed Federal Reserve research and analysis pertaining to electronic and other payment methods, including reports and documentation related to the CPO’s long-term strategic framework and the Retail Payments Office’s triennial payments studies from 2007 and 2010. We also interviewed Federal Reserve officials who have worked on these studies and obtained information about additional research efforts underway related to examining and preparing for changes in future demand for coins, notes, and other payment methods. To identify actions taken by the Federal Reserve regarding the circulation of the $1 coin, we obtained perspectives on the availability and use of $1 coins from Federal Reserve and Reserve Bank officials. We also met with industry stakeholders and coin user groups—including depository institution representatives, the National Automatic Merchandising Association, the Dollar Coin Alliance, Coinstar, and selected industry experts. We reviewed Federal Reserve and U.S. Mint responsibilities related to the circulation of the $1 coin, including those outlined in the Presidential $1 Coin Act of 2005 and the Native American $1 Coin Act. To identify actions taken to identify, analyze, and overcome barriers to the circulation of the $1 coin, we reviewed the Federal Reserve’s Annual Reports to Congress on the Presidential $1 Coin Program for 2007 through 2013 and interviewed Federal Reserve officials. In addition, we interviewed selected foreign government officials from Australia, Austria, and Canada to identify examples of actions taken to promote low-denomination coins and to enhance the circulation of these coins in other countries. We also analyzed Federal Reserve data on $1 coin inventories, U.S. Mint orders, payments, and receipts from 2007 through 2012. 
We assessed the reliability of data used in this report by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purpose of this report. We conducted this performance audit from March 2013 through October 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In 2009, the Federal Reserve’s Cash Product Office (CPO) established national upper and lower inventory targets for pennies, nickels, dimes, and quarters to track and measure the coin inventory. National upper and lower inventory targets are reviewed and updated annually. In 2013, the upper national inventory targets were set based on the average peak Reserve Bank coin inventory from 2009 to 2012, and the lower national targets were set based on the 10 consecutive days from 2009 to 2012 with the most coin payments to depository institutions. Figures 9 through 12 present the Reserve Bank inventories of quarters, dimes, nickels, and pennies from 2009 through 2012 and upper and lower national inventory targets from 2009 through 2013. Coin receipts from depository institutions and Reserve Bank payments to depository institutions fluctuate throughout the year—reflecting changes in the public’s spending patterns. (See fig. 13.)
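The target-setting rules described above can be expressed concretely. The sketch below is our interpretation under stated assumptions: the upper target averages each year’s peak inventory, and the lower target takes the largest payment total over any 10 consecutive days. The function names and sample figures are hypothetical.

```python
# Sketch of the 2013 national inventory target rules described above.
# Assumes daily data; function names and sample figures are hypothetical.

def upper_target(annual_peak_inventories):
    # Average of the peak Reserve Bank coin inventory observed each year.
    return sum(annual_peak_inventories) / len(annual_peak_inventories)

def lower_target(daily_payments, window=10):
    # Largest total of coin payments to depository institutions over any
    # `window` consecutive days in the period (a sliding-window maximum).
    return max(sum(daily_payments[i:i + window])
               for i in range(len(daily_payments) - window + 1))

# Hypothetical annual peaks (in millions of dollars) for 2009-2012:
print(upper_target([120, 110, 130, 140]))  # -> 125.0
```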
For example, in December 2012, the Federal Reserve paid about $479 million in coins to depository institutions (payments) and received about $452 million in coins from the depository institutions (receipts). Net pay is the difference between coins paid to depository institutions and coins received from depository institutions over a given period of time. Net pay greater than zero (positive) indicates that the Federal Reserve paid more coins to depository institutions than it received from depository institutions during that period (e.g., month or year). In addition, positive net pay indicates that additional coins—coins transferred from areas with negative net pay or new coins—are needed to meet demand. CPO uses national data on net pay, inventory, and expected changes to demand to make inventory management decisions such as where to transfer coin within and between Reserve Bank districts. When coin payments to depository institutions are greater than coin receipts from depository institutions, CPO orders new coins or uses circulated inventory to meet demand. Figures 14 through 17 present annual data for net pay, inventory, and new coin orders for quarters, dimes, nickels, and pennies from 2009 through 2012. To manage the circulating coin inventory, Reserve Banks incur coin management costs and interbank transfer costs. The Reserve Banks’ coin management costs include direct costs and support costs. (See fig. 18.) Direct costs are generally personnel costs, such as salaries and benefits, and support costs include utilities, protection, facilities, information technology, and other local and national support functions. From 2009 through 2012, direct costs represented about 30 percent and support costs represented about 70 percent of total coin management costs. Interbank transfers are shipments of coins from one Reserve Bank office region to another to ensure demand is met. (See fig. 19.)
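The net-pay arithmetic above reduces to a subtraction; a minimal sketch using the December 2012 figures from the report (in millions of dollars):

```python
# Net pay = coins paid to depository institutions minus coins received.
payments = 479   # December 2012 payments, $ millions (from the report)
receipts = 452   # December 2012 receipts, $ millions (from the report)

net_pay = payments - receipts
print(net_pay)   # -> 27

# Positive net pay: more coins flowed out than in, so CPO must order new
# coins or draw on circulated inventory (or transfers) to meet demand.
if net_pay > 0:
    print("meet demand with new coins or circulated inventory")
```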
From 2009 through 2012, the value of notes in circulation increased by 27 percent, from about $888 billion in 2009 to over $1,127 billion in 2012. (See fig. 20.) To effectively manage their inventory, private and governmental organizations involved in production and distribution operations use supply-chain and inventory-management practices. To assess the Federal Reserve’s management of the circulating coin inventory, we identified five key supply-chain management practices: collaboration, risk management, performance metrics, forecasting demand, and system optimization. Establishing, documenting, and following these practices and their supporting characteristics contributes to a more effective inventory management system. In addition, these supply-chain management practices are interrelated—as activities in one area may have implications in another—and can be used to achieve efficiency improvements and cost savings. For example, collaborative working relationships can improve risk management practices because information related to disruptions and actions to minimize or mitigate disruptions are more easily shared across the system. To assess elements of the five key supply-chain management practices, we selected 14 supporting characteristics based on their relevance to coin inventory management. Based on our review, we determined whether the Federal Reserve’s management of the circulating coin inventory followed or substantially followed, partially followed, or minimally or not followed each supporting characteristic. Our assessment of the characteristics served as the basis for our overall assessment as to whether each key practice was followed or substantially followed, partially followed, or minimally or not followed.
For example, if we found supporting evidence that two of the three characteristics of a practice were substantially followed but no evidence that the third characteristic was followed, we would determine that the key practice was partially followed. Lorelei St. James, (202) 512-2834 or stjamesl@gao.gov. In addition to the individual named above, Teresa Spisak, Assistant Director; Amy Abramowitz; Douglas Anderson; Patrick Dudley; Lawrance Evans, Jr.; David Hooper; Delwen Jones; Sara Ann Moessbauer; Colleen Moffatt Kimer; Constance Ify Onyiah; Josh Ormond; Jennifer Schwartz; and Maria Wallace made key contributions to this report.
Efficiently managing the circulating coin inventory helps ensure that enough coins are available to meet public demand while avoiding unnecessary production and storage costs. The Federal Reserve fulfills the coin demand of the nation's depository institutions (e.g., commercial banks and credit unions) by managing Reserve Bank inventory and ordering new coins from the U.S. Mint. GAO was asked to review this approach. This report examines (1) how the Federal Reserve manages the circulating coin inventory and the related costs, (2) the extent to which the Federal Reserve follows key practices in managing the circulating coin inventory, (3) actions taken to respond to potential changes in demand for coins and notes, and (4) actions taken with regard to the circulation of the $1 coin. GAO interviewed federal and foreign officials, experts, and industry representatives; reviewed documents and data on coin inventories; and compared the Federal Reserve's coin inventory management practices to key practices in supply chain management. In 2009, the Federal Reserve centralized coin management across the 12 Reserve Banks, established national inventory targets to track and measure the coin inventory, and in 2011 established a contract with armored carriers that store Reserve Bank coins in their facilities. However, according to Federal Reserve data, from 2008 to 2012, total annual Reserve Bank coin management costs increased by 69 percent, with increases at individual Reserve Banks ranging from 36 percent to 116 percent. The Federal Reserve's current strategic plan calls for using financial resources efficiently and effectively and monitoring costs to improve cost-effectiveness. However, the agency does not monitor coin management costs by each Reserve Bank--instead focusing on combined national coin and note costs--thus missing potential opportunities to improve the cost-effectiveness of coin-related operations across Reserve Banks.
In managing the circulating coin inventory, the Federal Reserve followed two of five key practices GAO identified and partially followed three. For example, the Federal Reserve followed the key practice of collaboration because it established multiple mechanisms for sharing information related to coin inventory management with partner entities. The Federal Reserve has developed some performance metrics in the form of upper and lower national coin inventory targets. However, it has not developed other goals or metrics related to coin supply-chain management. One key practice is for agencies to identify goals, establish performance metrics, and measure progress toward those goals. Establishing goals and metrics, such as those related to coin management costs, could aid the Federal Reserve in using information and resources to identify additional efficiencies. To collect information on potential changes in the demand for currency (coins and notes), the Federal Reserve has conducted studies and outreach, including developing a long-term strategic framework beginning in 2010 to consider changes in demand and implications for operations. While the magnitude of potential changes in the demand for currency is inherently uncertain, the Federal Reserve anticipates a gradual decline in currency use, and officials reported such changes could likely be accommodated by the current system. While Federal Reserve studies and data indicate electronic payments have increased over time, currency usage has remained strong. For example, from 2009 to 2012, the value of currency in circulation rose about 26 percent. Starting in 2007, the Federal Reserve took actions to overcome barriers to circulation of the $1 coin, such as holding regular meetings with depository institution representatives to gather feedback about demand for $1 coins.
The Federal Reserve manages the $1 coin inventory as it does for all other coin denominations--overseeing distribution and ensuring sufficient supply is available to meet demand nationwide. Reserve Banks currently hold approximately $1.4 billion in $1 coins, an amount that, according to the Federal Reserve, is sufficient to meet demand for more than 40 years. Reserve Bank officials, depository institution representatives, and coin terminal operators stated that $1 coins are readily available to the public throughout the country, but there is very low public demand for these coins. GAO recommends, among other things, that the Federal Reserve (1) develop a process to assess factors influencing coin operations costs and identify practices that could lead to cost savings and (2) establish additional performance goals and metrics relevant to coin inventory management. The Federal Reserve generally agreed with the report's recommendations.
We defined the financial services industry to include the following sectors:

depository credit institutions, which include commercial banks, thrifts (savings and loan associations and savings banks), and credit unions;

holdings and trusts, which include investment trusts, investment companies, and holding companies;

nondepository credit institutions, which extend credit in the form of loans and include federally sponsored credit agencies, personal credit institutions, and mortgage bankers and brokers;

the securities sector, which is made up of a variety of firms and organizations (e.g., broker-dealers) that bring together buyers and sellers of securities and commodities, manage investments, and offer financial advice; and

the insurance sector, including carriers and insurance agents that provide protection against financial risks to policyholders in exchange for the payment of premiums.

The financial services industry is a major source of employment in the United States. EEO-1 data showed that financial services firms we reviewed for this work, which have 100 or more staff, employed over 3 million people in 2008. Moreover, according to the U.S. Bureau of Labor Statistics, employment in the financial services industry was expected to grow by 5 percent from 2008 to 2018. Employment in the credit intermediation and related activities industry, which includes banks, is expected to account for 42 percent of all new jobs within the finance and insurance sector. As discussed in our 2006 report, overall diversity in management-level positions did not change substantially from 1993 through 2004. Specifically, figure 1 shows that minority representation at the management level increased from 11.1 percent to 15.5 percent during that period. Regarding the change within specific groups, African-Americans increased their representation from 5.6 percent to 6.6 percent, Asians from 2.5 percent to 4.5 percent, Hispanics from 2.8 percent to 4.0 percent, and American Indians from 0.2 percent to 0.3 percent.
Management-level representation by white women was largely unchanged at slightly more than one-third during the period, while representation by white men declined from 52.2 percent to 47.2 percent. Revised EEO-1 data for the period 2005 through 2008 show an increase in minority representation in management positions from 15.5 percent to 17.4 percent (fig. 2). This increase was largely driven by the growing representation of Asians in management positions—an increase of nearly a full percentage point from 4.7 percent to 5.5 percent during the period. Meanwhile, African-American representation remained stable at about 6.3 percent from 2005 through 2008, while Hispanic representation increased by half of a percentage point from 4.3 percent to 4.8 percent. Management-level representation by white women and white men both decreased by about one percentage point from 2005 through 2008. However, before 2008 EEO-1 data generally overstated representation levels for minorities and white women in the most senior-level positions, such as chief executive officers of large investment firms or commercial banks, because the category that captured these positions—“officials and managers”—covered all management positions. Thus, this category included lower-level positions (e.g., assistant manager of a small bank branch) that may have a higher representation of minorities and women. Recognizing this limitation, starting in 2007 EEOC revised its data collection form for employers to divide the “officials and managers” category into two subcategories: “executive/senior-level officers and managers” and “first/midlevel officials.” EEOC’s revised data, as reported in 2008, indicate that minorities accounted for 10 percent of senior positions in the financial services industry. As I discussed previously, the percentage in the broader data category was 17.4 percent. Moreover, as shown in figure 3, white men accounted for approximately 64 percent of senior-level management positions.
In contrast, African Americans held 2.8 percent of such senior management positions, while Hispanics held 3.0 percent and Asians 3.5 percent. Officials from the firms that we contacted for our previous work said that their top leadership was committed to implementing workforce diversity initiatives but noted that making such initiatives work was challenging. In particular, the officials cited ongoing difficulties in recruiting and retaining minority candidates and in gaining employees’ “buy-in” for diversity initiatives, especially at the middle management level. Some firms noted that they had stepped up efforts to help ensure a diverse workforce. However, the recent financial crisis has raised questions about their ongoing commitment to initiatives and programs that are designed to promote workforce diversity. Minorities’ rapid growth as a percentage of the overall U.S. population, as well as increased global competition, convinced some financial services firms that workforce diversity was a critical business strategy. Since the mid-1990s, some financial services firms have implemented a variety of initiatives designed to recruit and retain minority and women candidates to fill key positions. Officials from several banks said that they had developed scholarship and internship programs to encourage minority students to consider careers in banking. Some firms and trade organizations had also developed partnerships with groups that represent minority professionals and with local communities to recruit candidates through events such as conferences and career fairs. To help retain minorities and women, firms have established employee networks, mentoring programs, diversity training, and leadership and career development programs. Industry studies have noted, and officials from some financial services firms we contacted confirmed, that senior managers were involved in diversity initiatives. 
Some of these officials also said that this level of involvement was critical to success of a program. For example, according to an official from an investment bank, the head of the firm meets with all minority and female senior executives to discuss their career development. Officials from a few commercial banks said that the banks had established diversity “councils” of senior leaders to set the vision, strategy, and direction of diversity initiatives. A 2007 industry trade group study and some officials also noted that some companies were linking managers’ compensation to their progress in hiring, promoting, and retaining minority and women employees. However, the study found that most companies reported that they still did not offer managers financial rewards for improving diversity performance. This study also found that firms, overall, have significantly increased accountability for driving diversity results. For example, more firms reported that they were holding managers accountable for improving diversity. Performance reviews and management-by-objectives were the top two methods for measuring managers’ diversity performance. Finally, firms whose representation of women and minorities was above the median for the survey group were considerably more likely to use certain diversity management strategies and practices. A few firms had also developed performance indicators to measure progress in achieving diversity goals. These indicators include workforce representation, turnover, promotion of minority and women employees, and employee satisfaction survey responses. Officials from several financial services firms stated that measuring the results of diversity efforts over time was critical to the credibility of the initiatives and to justifying the investment in the resources such initiatives demanded. 
While financial services firms and trade groups we contacted had launched diversity initiatives, officials from these organizations and other information suggested that several challenges may have limited the success of their efforts. These challenges include the following:

Recruiting minority and women candidates for management development programs. Available data on minority students enrolled in Master of Business Administration (MBA) programs suggest that the pool of minorities, a source that may feed the “pipeline” for management-level positions within the financial services industry and other industries, is a limiting factor. In 2000, minorities accounted for 19 percent of all students enrolled in MBA programs in accredited U.S. schools; in 2006, that student population had risen to 25 percent. Financial services firms compete for minorities in this pool not only with one another but also with firms from other industries.

Fully leveraging the “internal” pipeline of minority and women employees for management-level positions. As shown in figure 4, there are job categories within the financial services industry that generally have more overall workforce diversity than the “Executive/Senior Level Officials & Managers” category, particularly among minorities. For example, minorities held almost 25 percent of “professional” positions in the industry in 2008, compared with 10 percent of “executive/senior level officials & managers” positions. According to a 2006 EEOC report, the professional category represented a possible pipeline of available management-level candidates. The EEOC report stated that the chances of minorities and women (white and minority combined) advancing from the professional category into management-level positions were lower than they were for white males.

Retaining minority and women candidates who are hired for key management positions. Many industry officials said that financial services firms lack a critical mass of minority men and women, particularly in senior-level positions, to serve as role models. Without a critical mass, the officials said that minority or women employees might lack the personal connections and access to informal networks that are often necessary to navigate an organization’s culture and advance their careers. For example, an official from a commercial bank we contacted said he learned from staff interviews that African-Americans believed that they were not considered for promotion as often as others, partly because they were excluded from the informal employee networks needed for promotion and advancement.

Achieving the “buy-in” of key employees, such as middle managers. Middle managers are particularly important to the success of diversity initiatives because they are often responsible for implementing key aspects of such initiatives and for explaining them to other employees. However, some financial services industry officials said that middle managers may be focused on other aspects of their responsibilities, such as meeting financial performance targets, rather than the importance of implementing the organization’s diversity initiatives. Additionally, the officials said that implementing diversity initiatives represented a considerable cultural and organizational change for many middle managers and employees at all levels. An official from an investment bank told us that the bank had been reaching out to middle managers who oversaw minority and women employees by, for example, instituting an “inclusive manager program.”

In closing, with the implementation of a variety of diversity initiatives over the past 15 years, diversity at the management level in the financial services industry has improved but not changed substantially.
Further, EEOC’s new EEO-1 data provide a clearer view of diversity within senior executive ranks, showing that diversity is lower than the overall industry management diversity statistics had indicated. Initiatives to promote management diversity at all levels within financial services firms face several key challenges, such as recruiting and retaining candidates and achieving the “buy-in” of middle managers. The impact of the recent financial crisis on diversity also warrants ongoing scrutiny. Without a sustained commitment to overcoming these challenges, management diversity in the financial services industry may continue to remain largely unchanged over time. Mr. Chairman and Madam Chairwoman, this concludes my prepared statement. I would be pleased to respond to any questions you or other members of the subcommittees may have. For further information about this testimony, please contact Orice M. Williams Brown on (202) 512-8678 or at williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Wesley M. Phillips, Assistant Director; Emily Chalmers; William Chatlos; John Fisher; Simin Ho; Marc Molino; and Linda Rego. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As the U.S. workforce has become increasingly diverse, many private and public sector organizations have recognized the importance of recruiting and retaining minority and women candidates for key positions. However, previous congressional hearings have raised concerns about a lack of diversity at the management level in the financial services industry, which provides services that are essential to the continued growth and economic recovery of the country. The recent financial crisis has renewed concerns about the financial services industry's commitment to workforce diversity. This testimony discusses findings from a June 2006 GAO report (GAO-06-617), February 2008 testimony (GAO-08-445T), and more recent work on diversity in the financial services industry. Specifically, GAO assesses (1) what the available data show about diversity at the management level from 1993 through 2008 and (2) steps that the industry has taken to promote workforce diversity and the challenges involved. To address the testimony's objectives, GAO analyzed data from the Equal Employment Opportunity Commission (EEOC); reviewed select studies; and interviewed officials from financial services firms, trade organizations, and organizations that represent minority and women professionals. To the extent possible, key statistics have been updated. EEOC data indicate that overall diversity at the management level in the financial services industry did not change substantially from 1993 through 2008, and diversity in senior positions remains limited. In general, EEOC data show that management-level representation by minority women and men increased from 11.1 percent to 17.4 percent during that period. However, these EEOC data overstated minority representation at senior management levels, because the category includes mid-level management positions, such as assistant branch manager, that may have greater minority representation. 
In 2008, EEOC reported revised data for senior-level positions only, which showed that minorities held 10 percent of such positions compared with 17.4 percent of all management positions. The revised data also indicate that white males held 64 percent of senior positions in 2008, African-Americans held 2.8 percent, Hispanics 3 percent, and Asians 3.5 percent. Financial services firms and trade groups have initiated programs to increase workforce diversity, but these initiatives face challenges. The programs include developing scholarships and internships, partnering with groups that represent minority professionals, and linking managers' compensation with their performance in promoting a diverse workforce. Some firms have developed indicators to measure progress in achieving workforce diversity. Industry officials said that among the challenges these initiatives faced were recruiting and retaining minority candidates, and gaining the "buy-in" of key employees such as the middle managers who are often responsible for implementing such programs. Without a sustained commitment to overcoming these challenges, diversity at the management level may continue to remain generally unchanged over time.
The 13th Congressional District of Florida comprises DeSoto, Hardee, Sarasota, and parts of Charlotte and Manatee Counties. In the November 2006 general election, there were two candidates in the race to represent the 13th Congressional District: Vern Buchanan, the Republican candidate, and Christine Jennings, the Democratic candidate. The State of Florida certified Vern Buchanan the winner of the election. The margin of victory was 369 votes out of a total of 238,249 votes counted. Table 1 summarizes the results of the election and shows that Sarasota County exhibited a significantly higher undervote rate than the other counties in the congressional district. As seen in table 1, about 18,000 undervotes were reported in Sarasota County in the race for Florida’s 13th Congressional District. After the election results were contested in the House of Representatives, the task force met and unanimously voted to seek GAO’s assistance in determining whether the voting systems contributed to the large undervote in Sarasota County. On June 14, 2007, we met with the task force and agreed upon an engagement plan. We reported on the status of our review at an interim meeting held by the task force on August 3, 2007. On October 2, 2007, we reported that our analysis of election data did not identify any particular voting machines or machine characteristics that could have caused the large undervote in the Florida-13 race. The undervotes in Sarasota County were generally distributed across all machines and precincts. We found that some of the prior tests and reviews conducted by the State of Florida and Sarasota County provided assurance that certain components of the voting system in Sarasota County functioned correctly, but they were not enough to provide reasonable assurance that the iVotronic DREs did not contribute to the undervote.
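A quick check of the figures above shows why the Sarasota County undervote drew scrutiny: it was dozens of times larger than the certified margin of victory.

```python
# Arithmetic on the election figures reported above.
margin = 369             # certified margin of victory, in votes
total_counted = 238_249  # total votes counted district-wide
undervotes = 18_000      # approximate Sarasota County undervotes

print(round(100 * margin / total_counted, 2))  # margin as a percent of votes counted -> 0.15
print(undervotes / margin)   # the undervote was roughly 49 times the margin
```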
We proposed three tests—firmware verification, ballot, and calibration—to provide increased assurance, but not absolute assurance, that the iVotronic DREs did not cause the large undervote in Sarasota County. We stated that the successful conduct of the tests could reduce the possibility that the voting systems caused the undervote and shift attention to the possibilities that voters intentionally undervoted or voters did not properly cast their ballots on the iVotronic DRE, potentially because of issues relating to interaction between voters and the ballot. In the 2006 general election, Sarasota County used voting systems manufactured by ES&S. The State of Florida has certified different versions of ES&S voting systems. The version used in Sarasota County was designated ES&S Voting System Release 4.5, Version 2, Revision 2, and consisted of iVotronic DREs, a Model 650 central count optical scan tabulator for absentee ballots, and the Unity election management system. It was certified by the State of Florida on July 17, 2006. The certified system includes different configurations and optional elements, several of which were not used in Sarasota County. The election management part of the voting system is called Unity; the version that was used was 2.4.4.2. Figure 1 shows the overall election operation using the Unity election management system and the iVotronic DRE. Sarasota County used iVotronic DREs for early and election day voting. Specifically, Sarasota County used the 12-inch iVotronic DRE, hardware version 1.1 with firmware version 8.0.1.2. Some of the iVotronic DREs are configured to use audio ballots, which are often referred to as Americans with Disabilities Act (ADA) machines. The iVotronic DRE uses a touch screen—a pressure-sensitive graphics display panel—to display and record votes (see fig. 2). The machine has a storage case that also serves as the voting booth. 
The operation of the iVotronic DRE requires the use of a personalized electronic ballot (PEB), which is a storage device with an infrared window used for transmission of ballot data to and from the iVotronic DRE. The iVotronic DRE has four independent flash memory modules, one of which contains the program code—firmware—that runs the machine; the remaining three flash memory modules store redundant copies of ballot definitions, machine configuration information, ballots cast by voters, and event logs (see fig. 3). The iVotronic DRE includes a VOTE button that the voter has to press to cast a ballot and record the information in the flash memory. The iVotronic DRE also includes a compact flash card that can be used to load sound files onto iVotronic DREs with ADA functionality. The iVotronic DRE’s firmware can be updated through the compact flash card. Additionally, at the end of polling, the ballots and audit information are to be copied from the internal flash memory module to the compact flash card. To use the iVotronic DRE for voting, a poll worker activates the iVotronic DRE by inserting a PEB into the PEB slot after the voter has signed in at the polling place. After the poll worker makes selections so that the appropriate ballot will appear, the PEB is removed and the voter is ready to begin using the system. The ballot is presented to the voter in a series of display screens, with candidate information on the left side of the screen and selection boxes on the right side (see fig. 4). The voter can make a selection by touching anywhere on the line, and the iVotronic DRE responds by highlighting the entire line and displaying an X in the box next to the candidate’s name. The voter can also change his or her selection by touching the line corresponding to another candidate or by deselecting his or her choice. “Previous Page” and “Next Page” buttons are used to navigate the multipage ballot. 
After completing all selections, the voter is presented with a summary screen with all of his or her selections (see fig. 5). From the summary screen, the voter can change any selection by selecting the race. The race will be displayed to the voter on its own ballot page. When the voter is satisfied with the selections and has reached the final summary screen, the red VOTE button is illuminated, indicating the voter can now cast his or her ballot. When the VOTE button is pressed, the voting session is complete and the ballot is recorded on the iVotronic DRE. In Sarasota County’s 2006 general election, there were nine different ballot styles with between 28 and 40 races, which required between 15 and 21 electronic ballot pages to display, and 3 to 4 summary pages for review purposes. An election system is based upon a complex interaction of people (voters, election officials, and poll workers), processes (controls), and technology that must work effectively together to achieve a successful election. The particular technology used to cast and count votes is a critical part of how elections are conducted, but it is only one facet of a multifaceted election process that involves the interplay of people, processes, and technology. As we have previously reported, every stage of the election process— registration, absentee and early voting, preparing for and conducting Election Day activities, provisional voting, and vote counting—is affected by the interaction of people, processes, and technology. Breakdowns in the interaction of people, processes, and technology may, at any stage of an election, impair an accurate vote count. For example, if the voter registration process is flawed, ineligible voters may be allowed to cast votes. Poll worker training deficiencies may contribute to discrepancies in the number of votes credited and cast, if voter information was not entered properly into poll books. 
Mistakes in using the DRE systems could result from inadequate understanding of the equipment on the part of those using it. As noted in our October statement, we recognize that human interaction with the ballot layout could be a potential cause of the undervote, and we noted that several suggestions have been offered as possible ways to establish that voters are intentionally undervoting and to provide some assurance that the voting systems did not cause the undervote. For instance:

A voter-verified paper trail could provide an independent confirmation that the touch screen voting systems did not malfunction in recording and counting the votes from the election. The paper trail would reflect the voter’s selections and, if necessary, could be used in the counting or recounting of votes. This issue was also recognized in the source code review performed by the Security and Assurance in Information Technology (SAIT) laboratory at Florida State University, as well as in the 2005 and draft 2007 Voluntary Voting Systems Guidelines prepared for the Election Assistance Commission. We have previously reported on the need to implement such a function properly.

Explicit feedback to voters that a race has been undervoted, and a prompt for voters to affirm their intent to undervote, might help prevent many voters from unintentionally not casting a vote in a race. On the iVotronic DREs, such feedback and prompts are provided only when the voter attempts to cast a completely blank ballot, but not when a voter fails to vote in individual races.

Offering a “none of the above” option in a race would provide voters with the opportunity to indicate that they are intentionally undervoting. For example, the State of Nevada provides this option in certain races in its elections.
We reported that decisions about these or other suggestions about ballot layout or voting system functions should be informed by human factors studies that assess such measures’ effectiveness in accurately recording voters’ preferences, making voting systems easier to use, and preventing unintentional undervotes. We previously reported that having reasonable assurance that all iVotronic DREs that recorded votes in the 2006 general election were running the same certified firmware would allow us to have more confidence that the iVotronic DREs will behave similarly when tested. Consequently, if we are reasonably confident that the same firmware was running in all 1,499 machines, then we are more confident that the results of other tests, conducted both by GAO and by others, on a small number of machines can be used to obtain increased assurance that the iVotronic DREs did not cause the undervote. We also reported that there was a lack of assurance that the source code that was held in escrow by the Florida Division of Elections and that was previously reviewed by Florida State University and by us, if rebuilt, would correspond to the firmware that was certified and held in escrow by the Florida Division of Elections. We found that the firmware on a statistically selected sample of 115 iVotronic DREs was the same as that certified by the Florida Division of Elections. We also found that the escrowed source code, when rebuilt into executable firmware, corresponded to the 8.0.1.2 firmware that was certified by the Florida Division of Elections.
Our methodology to obtain reasonable assurance that the firmware used on Sarasota County’s iVotronic DREs during the 2006 general election was the same as that certified by the State of Florida was broken down into two basic steps: (1) selecting a representative sample of machines, and (2) verifying that the firmware extracted from the voting machines was the same as the escrowed firmware that had been certified by the Florida Division of Elections. Appendix I details the methodology for selecting the representative sample of machines. Appendix II contains a list of the serial numbers of the tested iVotronic DREs. To ensure that we would be testing with the iVotronic firmware certified by the Florida Division of Elections, on October 18, 2007, we and officials from the Florida Division of Elections made two copies of the escrowed iVotronic 8.0.1.2 firmware on compact discs (CD) and placed them in two tamper-evident bags with serial numbers. The bags were subsequently hand-delivered by a Florida Division of Elections official for our use in the firmware verification test and for the rebuilding of the firmware from the source code. In order to extract the firmware from an iVotronic DRE, the machine was placed on an anti-static mat and the case was opened using a special screwdriver. After lifting the case, a special extraction tool was used to remove the flash memory module that contains the firmware. The flash memory module was then inserted in the socket of a Needham Electronics’ EMP-300 device that was connected to the universal serial bus (USB) port of a personal computer (PC). The EMPWin application running on that PC was used to read the firmware from the flash memory module and save the extracted firmware on the PC. The Florida Division of Elections loaned us the EMP-300 and EMPWin application for use in extracting firmware from the flash memory module. 
To compare the extracted firmware with the escrowed version, we relied on two commercially available software programs. First, we acquired a license for PrestoSoft’s ExamDiff Pro, a program designed to highlight the differences between two files. For each selected iVotronic DRE, the extracted firmware was compared with the escrowed version, with any differences highlighted by the program. Second, to further ensure that the extracted firmware matched the escrowed firmware, we compared the SHA-1 hash value of the extracted firmware to the hash value of the comparable certified firmware. We computed the SHA-1 hash by using the Maresware hash software that was provided by the Florida Division of Elections. In order to ensure that the commercial Maresware hash software properly calculated the SHA-1 hash value, we (1) created four files and obtained a fifth file that contained executable code, (2) obtained hash values for each file by either using an external program that generated the hash values using the same hashing algorithm as the commercial product or using known hash values, and (3) used the commercial program acquired for testing the firmware to ensure that the hash values it generated for these five files were identical to the expected hash values for those files. In each case, the hash values generated by the commercial program were identical to the expected values. Accordingly, we obtained reasonable assurance, for the purposes of our review, that the commercial program produced its hash values in accordance with the NIST algorithm.
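The two-way check described above—a byte-for-byte comparison reinforced by a SHA-1 hash comparison—can be sketched in Python. This is an illustrative sketch only: the function names and file-based interface are ours, and the SHA-1 implementation comes from the standard library’s hashlib rather than the commercial tools we used.

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-1 hex digest of a file, reading it in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def firmware_matches(extracted_path, escrowed_path):
    """Return True when the extracted firmware image is byte-identical to
    the escrowed, certified firmware image (equal SHA-1 digests)."""
    return sha1_of_file(extracted_path) == sha1_of_file(escrowed_path)
```

Comparing digests of equal-strength hashes is equivalent, for practical purposes, to the full file comparison, which is why the two methods serve as independent cross-checks of each other.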
At the end of each day, we (1) used the commercial Maresware software to compute hash values for each of the firmware programs that had been unloaded during that day and all previous days, and (2) compared each hash created by this program to the expected value that was calculated from the firmware that had been escrowed by the Florida Division of Elections. This comparison provided further assurance that the extracted firmware was (1) identical to the version escrowed by the Florida Division of Elections when the hashes agreed, or (2) different if the hashes did not agree. We also verified that sequestered machines had not been used since the 2006 general election. For each of these sequestered machines, we used an audit PEB to copy the audit logs onto a compact flash card and then used the Unity election reporting manager to generate event log reports. We examined the event logs for the date and time of occurrence of activities that would indicate whether the machine had been used. Lack of such activities since the 2006 general election provided reasonable assurance that the machines had not been used since they were sequestered. In addition, we sought to verify that the source code for iVotronic DRE firmware version 8.0.1.2 previously examined by the Florida State University SAIT source code review team and by GAO corresponded with the version certified by the Florida Division of Elections; ES&S officials stated that the company still had the development environment that could be used to compile, or rebuild, the certified firmware from the source code retained in escrow by the Florida Division of Elections. As we previously noted, a software review and security analysis of the iVotronic DRE firmware was conducted by a team led by Florida State University’s SAIT laboratory. The software review team attempted to confirm or refute many different hypotheses that, if true, might explain the undervote in the race for the 13th Congressional District.
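The sequestration check amounts to scanning each machine’s event log for any activity after the 2006 general election. A minimal sketch follows, assuming log entries have already been parsed into (timestamp, description) pairs; the real Unity event log report format differs, and the cutoff date is our assumption.

```python
from datetime import datetime

# Assumed cutoff: the 2006 general election was held on November 7, 2006.
ELECTION_DAY_END = datetime(2006, 11, 7, 23, 59)

def activity_since_election(events, cutoff=ELECTION_DAY_END):
    """Return any log entries dated after the cutoff; an empty result is
    consistent with the machine having been sequestered and unused."""
    return [(ts, desc) for ts, desc in events if ts > cutoff]
```

An empty result for every sequestered machine supports, but does not by itself prove, non-use—which is why this check was paired with the firmware verification test.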
In doing so, they made several observations about the source code, which we were able to independently verify. The rebuilding of the firmware was conducted by ES&S at its Rockford, Illinois, facility on November 19, 2007, and witnessed by us. Prior to the rebuild, the Florida Division of Elections provided an unofficial copy of the source code to ES&S so that ES&S could prepare the development environment and test the rebuild steps. Using the official sealed copy of the source code CD, ES&S rebuilt the firmware in front of GAO representatives. ES&S described the development environment and we inspected it to satisfy ourselves that the firmware was faithfully rebuilt using the escrowed source code. After the rebuilding of the firmware, the certified version of 8.0.1.2 firmware was compared with the rebuilt version using PrestoSoft’s ExamDiff Pro. While the Florida audit team had previously confirmed that the firmware running on six iVotronic DREs matched the certified version held in escrow by the Florida Division of Elections, we found that the sample size was too small to support generalization to all 1,499 iVotronic DREs that recorded votes during the 2006 general election. Accordingly, we conducted a firmware verification test on a statistically valid sample of 115 iVotronic DRE machines used by Sarasota County during the 2006 general election. The selected machines fell into two groups—machines that had not been used since the 2006 general election (referred to as sequestered machines) and machines that had been used in subsequent elections. For each machine, we extracted the firmware from a flash memory module in that machine and then compared the extracted firmware with the escrowed version using commercially available file comparison tools to determine whether they agreed. We found that the firmware installed in the flash memory module of each machine matched the escrowed firmware that had been certified by Florida. 
The statistical approach used to select these machines lets us estimate with a 99 percent confidence level that at least 1,439, or 96 percent, of the 1,499 machines used in the 2006 general election used the firmware that was certified by the State of Florida. We witnessed the rebuild of the iVotronic DRE’s firmware from the source code that was held in escrow by the Florida Division of Elections and that was previously reviewed by Florida State University and by us. At ES&S’s software development facility, we observed that rebuilding the firmware from the escrowed source code resulted in the same firmware that was certified and held in escrow by the Florida Division of Elections. The comparison of the escrowed firmware to the version that was rebuilt by the vendor identified no differences and provides us reasonable assurance that the escrowed firmware corresponded to the escrowed source code. The successful rebuilding of the firmware from the escrowed source code enables us to have greater confidence in the conclusions derived from prior source code reviews by Florida State University and us. In our October 2007 statement, we noted that there were 112 common ways a voter may interact with the system to select a candidate in the Florida-13 race and cast the ballot, and that prior testing of the iVotronic DREs covered only 13 of these 112 possible ways. We developed 224 test ballots to verify that the iVotronic DRE could accurately capture ballots using each of these 112 common ways a voter may interact with the system; 112 test ballots were cast on one machine configured for early voting, and another 112 ballots were cast on nine machines configured for election day voting. Our tests showed that for each of the 224 test ballots, the iVotronic DRE correctly captured each vote as cast for the Florida-13 race. We also conducted firmware verification tests on these machines and verified that they were running the certified firmware. 
The methodology for ballot testing can be broken into two major areas—development of the test ballots and execution of the test using those ballots. The following sections discuss these areas. In examining how the system allowed voters to make a selection in the Florida-13 race, we found at least 112 different ways a voter could make his or her selection and cast the ballot in the Florida-13 race, assuming that it was the only race on the ballot. Specifically, a voter could (1) initially select either candidate or neither candidate (i.e., undervote), (2) change the vote on the initial screen, and (3) use a combination of features to change or verify his or her selection by using the page back and review screen options. Accordingly, we tested these 112 ways to select a candidate on the early voting machine and on the election day machines (224 test ballots in total). The 112 standard test ballots cover all combinations of the following types of voter behavior:

Voter makes selection on the initial ballot screen and makes no changes or takes any other action to return to the contest to review or change the selection.

Voter makes selection on the initial ballot screen and decides before leaving that screen to change the selection because of an error in selecting the candidate or for some other reason.

Voter makes selection on the initial ballot screen and then decides to use the page back option to review or change the selection.

Voter makes selection on the initial ballot screen and continues to the review screen and then decides to use the review screen option to review or change the selection.

Voter makes selection on the initial ballot screen and uses a combination of page back and review screen options to review or change the selection.

In each instance where a selection could be made, three choices were possible for the Florida-13 race: a selection for one of the two candidates, or no selection (i.e., an undervote).
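To illustrate how such a test matrix is built, the sketch below enumerates interaction cases as the cross product of an initial choice, a revisit path, and a choice at each revisited screen, with three options at every selection opportunity. This is a simplified reconstruction for illustration only: the actual design distinguished additional behaviors (such as changing a selection before leaving the initial screen) and arrived at 112 cases, which this sketch does not reproduce.

```python
from itertools import product

CHOICES = ("Candidate 1", "Candidate 2", "undervote")  # options per opportunity
REVISIT_PATHS = ("none", "page_back", "review_screen", "page_back_then_review")

def generate_cases():
    """Enumerate one case per (revisit path, sequence of selections).
    The last selection in each sequence is what pressing VOTE should record."""
    cases = []
    for initial, path in product(CHOICES, REVISIT_PATHS):
        if path == "none":
            cases.append((path, (initial,)))
        elif path in ("page_back", "review_screen"):
            for second in CHOICES:  # one further selection opportunity
                cases.append((path, (initial, second)))
        else:  # page back, then review screen: two further opportunities
            for second, third in product(CHOICES, CHOICES):
                cases.append((path, (initial, second, third)))
    return cases
```

Even this simplified matrix (3 + 9 + 9 + 27 cases per initial-choice grouping) shows how quickly exhaustive coverage grows, and why the test design deliberately excluded some behavior variants.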
In developing the standard test ballots, we did not consider all combinations of some other types of voter behavior that would have significantly increased the number of test cases without providing significant benefits. In most cases, such behaviors are variants of the primary voter behaviors that we examined. The following are examples of voter behavior that were not included in the standard test set in order to reduce the number of test cases to practicable levels:

Using a one-touch or two-touch method to make changes on a ballot page.

Varying the number of pages a voter may go back (“page backs”) to return to the page containing the Florida-13 race to change or review a selection.

Casting a ballot from the review screen selection. The VOTE button is not activated until the voter reaches the last review screen. However, once the VOTE button has been activated, a ballot may be cast from any screen. For example, a voter may activate the VOTE button and then return to a contest to review or change the selection using the review screen option. Once the voter goes to the contest from the review screen and makes any desired changes, the voter can then cast the ballot from that screen rather than going back to the last page of the review screen or even the review screen that was used to return to the selection.

Although we did not consider all combinations of these types of voter behavior when developing the standard test ballots, we included some of these user interactions in the execution of applicable test ballots to provide increased assurance that the system would handle these voter behaviors. For each applicable test ballot, we randomly determined the test procedure that should be used for the following attributes:

Initial change method – The standard test ballots address voters making changes on the initial ballot screen. Where possible, the method used to change (one-touch or two-touch) the selection was randomly selected.
Number of page backs – The ballots used by Sarasota County included the page back function. After reviewing the ballots, it appeared reasonable to expect that voters who may have used the page back option would probably decide that they had missed the race by the time they went one or two pages beyond the page with the Florida-13 race. Therefore, when a standard test ballot contained a page back requirement, the number of page backs was randomly selected to determine whether one or two page backs should be used.

Page back change method – Some test ballots required a change after the page back option was selected. As with the initial change method, where possible, the method of changing (one-touch or two-touch) the selection was randomly assigned.

Review screen change method – The system displays a review screen that shows the voter’s selections (or lack of selections) after the voter has progressed through all contests. On the review screen, the voter can select a race to go directly to that contest and (1) review the selection made, and (2) make any desired corrections. The standard test ballots were designed to cover this type of event. Where possible, the method used to make the change (one-touch or two-touch) was randomly selected.

Activate VOTE button and cast ballots from the review screen – In order to test casting ballots from locations other than the last review screen, the VOTE button must be activated prior to going to a screen where the ballot is cast. In order to determine which test ballots should be used for this test, a two-step approach was adopted. First, a random selection of the ballots that use the review screen option was made to determine which test ballots should have the VOTE button activated. Then a random selection of these test ballots was made to determine whether the ballot should be cast from the review screen selection.
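The random assignment of these test-procedure attributes can be sketched as follows. The attribute names and dictionary format are our own illustrative simplification of the test scripts, and the fixed seed merely makes the illustration reproducible.

```python
import random

rng = random.Random(20071128)  # fixed seed so the assignment is reproducible

def assign_attributes(ballot):
    """Randomly fix the open test-procedure attributes of one test ballot.
    Only attributes applicable to the ballot (flagged True) are assigned."""
    out = dict(ballot)
    if ballot.get("initial_change"):
        out["initial_change_method"] = rng.choice(["one-touch", "two-touch"])
    if ballot.get("page_back"):
        out["page_backs"] = rng.choice([1, 2])
        out["page_back_change_method"] = rng.choice(["one-touch", "two-touch"])
    if ballot.get("review_screen"):
        out["review_change_method"] = rng.choice(["one-touch", "two-touch"])
    return out
```

Randomizing only where an attribute applies mirrors the approach described above: each applicable test ballot gets a concrete procedure without inflating the number of standard test cases.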
Besides those attributes that directly affect the selection in the Florida-13 race, we varied the other attributes on the ballot in order to complete the ballot test. For each of the 224 test ballots, we used random values for other attributes, including the following:

Ballot style – Each ballot was randomly assigned one of the nine ballot styles used in the election.

Write-in candidate – All ballot styles include write-in options in at least two races—United States Senate and State Governor/Lieutenant Governor. To verify that the iVotronic DRE accurately recorded the selection in the Florida-13 race for each test ballot, we needed a way to identify each test ballot in the ballot image log. To accomplish this, we randomly selected one of these two races, selected the write-in candidate for the race, and entered a unique value (i.e., the test ballot number) in the write-in field.

Candidates and selections in other races on the ballot – Each ballot style had between 28 and 40 contests on the ballot. The values for the contests besides the Florida-13 race and the write-in field were also randomly selected. For example, most items had three possible choices—candidate 1 (or Yes), candidate 2 (or No), and undervote. Which of these three values was used for a given contest was randomly determined.

The values used for these attributes were independently determined for the election day and early voting test ballots. For example, Test Ballot 2 (election day) and Test Ballot 202 (early voting) were designed to test the same standard condition described by one of the 112 standard test ballots. Table 2 illustrates some of the similarities and differences between the two test ballots that result from the random selection process used to determine the other aspects of the ballot. Finally, we selected 10 random machines to be used for the ballot testing. One machine was selected from those that were used in early voting in the 2006 general election.
The other nine were selected from those that used each of the ballot styles on election day in the 2006 general election. For each election day machine, the assigned precinct was the same as the precinct where the machine was used during the 2006 general election. For the early voting machine, we needed to assign precincts for each ballot style. We used the precinct associated with the back-up machine used for election day testing as the precinct for that ballot style. If the first back-up machine was assigned the same precinct number as the primary election day machine, then we used the precinct associated with the second back-up machine. This approach was taken to maximize the number of precincts used in the testing efforts. A two-person test team conducted the ballot testing. One tester read aloud the steps called for in the test ballot while the other tester performed those actions. In order to ensure that all of the actions relating to the Florida-13 congressional race were performed as laid out in the test ballots, a two-person review team observed a video display of the test and compared the actions taken by the tester to those called for in the test ballot. Furthermore, after the testing was completed, another team reviewed the video recording of these tests to validate that the actions relating to the Florida-13 contest taken by the tester were consistent with those called for by the test ballots. The criteria used to determine whether the test produced the expected result were derived from the Florida Voting System Standards. Specifically, among other things, these standards require the system to allow the voter to (1) determine whether the inputs given to the system have selected the candidates that he or she intended to select, (2) review the candidate selections made by the voter, and (3) change any selection previously made and confirm the new selection prior to the act of casting the ballot.
Furthermore, the system must communicate to the voter the fact that the voter has failed to vote in a race (undervote) and require the voter to confirm his or her intent to undervote before casting the ballot. During the ballot test, the actual system response was compared to the expected results by a review team and after the testing was completed another review team compared the video records to the test ballots to validate that the tests had been performed in accordance with test scripts for the Florida-13 contest. At the beginning of testing on each iVotronic DRE, the machine was opened for voting and a zero tape was printed. After the casting of all test ballots on the machine, the machine was closed and a results tape was printed. The closing of the machine also writes the audit data to the compact flash card, including event data and ballot images. We examined the results tapes and compared the total votes cast for the Florida-13 contest against what was expected from the test ballots. We also kept track of the total number of ballots handled by the machine, called the “protective count” of an iVotronic DRE, before and after the test and confirmed that the increase in protective count matched the number of test ballots cast on that machine. Using the Unity election reporting manager, we read the compact flash cards and processed the audit data on each ballot test machine. We generated the ballot image log and examined the individual test ballots in the ballot image log. We looked for the unique identifier that was used for each test ballot and then confirmed that the ballot image reflected the correct selection for the Florida-13 race as called for by the test ballot. For example, the test script for Test Ballot 1 required the tester to (1) select a write-in candidate for U.S. Senate and (2) enter the value of “TB1” in the write-in field. 
Because only this test ballot used this value, we could review the ballot image log to determine what selection the voting machine recorded for the Florida-13 contest for the ballot showing “TB1” as the write-in candidate for U.S. Senate. Finally, using the process discussed previously for firmware testing, the firmware on all machines used for ballot testing was validated to ensure these machines used the same firmware that had been certified by the Florida Division of Elections. After executing the ballot tests on the election day and early voting machines, we found that all 10 iVotronic DREs accurately captured the votes for the Florida-13 race on the test ballots. We used a unique identifier in a write-in field in each test ballot and verified that the iVotronic DRE accurately captured the tester’s final selections in the Florida-13 race for each test ballot. Testing 112 ways to select a candidate on a single machine also provided us some additional assurance that the volume of ballots cast on election day did not contribute to the undervote. We noted that casting 112 ballots on a single machine was more than the number of ballots cast on over 99 percent of the 1,415 machines used on election day. Because little was known about the effect of a miscalibrated machine on the behavior of an iVotronic DRE, we deliberately miscalibrated two iVotronic DREs using 10 different miscalibration methods to verify the functioning of the machine. Although the miscalibration made the machines more difficult to use, the 39 ballots used in this test confirmed that the system correctly recorded the displayed vote for the Florida-13 contest, and the miscalibration did not appear to contribute to the undervote. For the calibration testing, we judgmentally selected five different miscalibration patterns and repeated each pattern twice—once with a small amount of miscalibration and the second time with a large amount of miscalibration.
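The verification step described above—matching each cast ballot image to its test script through the unique write-in value, then comparing the recorded Florida-13 selection against the expected one—can be sketched as follows. The dictionary-based formats for the ballot image log and the test scripts are our simplification of the Unity reports, not their actual layout.

```python
def verify_ballot_images(images, scripts):
    """Flag any test ballot whose recorded Florida-13 selection differs
    from the selection its script called for.

    images:  maps write-in identifier (e.g., 'TB1') -> recorded selection
    scripts: maps write-in identifier -> expected selection
    Returns a dict of identifier -> (expected, recorded) for mismatches,
    including scripts whose identifier never appears in the image log."""
    return {tb: (scripts[tb], images.get(tb))
            for tb in scripts
            if images.get(tb) != scripts[tb]}
```

An empty result means every test ballot was found in the log and recorded the expected Florida-13 selection, which is the outcome reported for all 224 ballot-test ballots.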
The amount of miscalibration was also subjective—roughly 0.25 to 0.5 inch for a small amount and about 0.7 to 1 inch for a large miscalibration. The miscalibration patterns are shown in the following figures. We conducted calibration testing on two different machines that were used for ballot testing. As with ballot testing, at the beginning of testing of each machine, we opened the machine for voting and printed a zero tape. During the opening process, we calibrated the machine with one of the miscalibration patterns. After the machine was miscalibrated, we then executed at least three of the test ballots that were used during ballot testing on that machine for each test. The test ballots were rotated among the miscalibration patterns. For example, one of the machines had eight different ballot test scripts. The first three were used on one miscalibration pattern, the next three on another miscalibration pattern, and the final two plus the first one were used on another miscalibration pattern. After the ballots were cast for one miscalibration pattern, the machine would be miscalibrated with another pattern. After the needed miscalibration patterns were tested on a machine, the iVotronic DRE was closed and a results tape was printed. The closing of the iVotronic DRE also wrote the audit data to the compact flash card. During the testing, the tester was instructed to take whatever actions were necessary to achieve the desired result. For example, if the script called for the selection of Candidate A, then the tester would keep touching the screen until Candidate A was selected. A review team monitored the testing to ensure that (1) the proper candidate for the Florida-13 congressional race was ultimately selected and (2) the review screen showed this candidate selection when it was first presented. As with the ballot test, we used the Unity election reporting manager to read the compact flash cards and processed the audit data for each ballot test machine.
We generated the ballot image log and examined the individual test ballots in the ballot image log. We looked for the unique identifier that was used for each test ballot and then confirmed that the ballot image reflected the correct selection for the Florida-13 race as called for by the test ballot. After the testing had been completed, the expected results shown in the test ballot scripts were compared to the actual results contained in the ballot image log and the results tape using the same process discussed in the ballot testing methodology. The 39 ballots used in this test confirmed that the system correctly recorded the displayed vote for the Florida-13 contest. We also noted that the miscalibration clearly made the machines harder to use; according to a Sarasota County official who observed the tests, during an actual election these machines would probably have been either recalibrated or removed from service once a voter brought the problem to the precinct’s attention. Figure 11 shows an example of the effects of our miscalibration efforts on the screen that is used to confirm the calibration results. Specifically, the stylus points to where the tester is touching the screen while the “X” on the screen shows where the machine indicated the stylus was touching the screen. In a properly calibrated machine, the stylus and the “X” are basically at the same point. Figure 12 shows an example of where the tester is touching the screen to make a selection and how this “touch” is translated into a selection. As can be seen, the finger making the selection is touching a position that in a properly calibrated machine would not result in the selection shown.
However, the machine clearly shows the candidate selected and our tests confirmed that for the 39 ballots tested, the candidate actually shown by the system as selected (in this example, the shaded line) was the candidate shown on the review screen, as well as the candidate that received the vote when the ballot was cast. Our tests showed that (1) the firmware installed in a statistically selected sample of machines used by Sarasota County during the 2006 general election matched the firmware certified by the Florida Division of Elections, and we confirmed that when the manufacturer rebuilt the iVotronic 8.0.1.2 firmware from the escrowed source code, the resulting firmware matched the certified version of firmware held in escrow, (2) the machines properly displayed, recorded, and counted the selections for all test ballots cast during the ballot testing involving the 112 common ways a voter may interact with the system to cast a ballot for the Florida-13 race, and (3) the machines accurately recorded the test ballots displayed on deliberately miscalibrated machines. The results of these tests did not identify any problems that would indicate that the iVotronic DREs were responsible for the undervote in the Florida-13 race in the 2006 general election. As we noted when we proposed these tests, even after completing these tests, we do not have absolute assurance that the iVotronic DREs did not play any role in the large undervote. Absolute assurance is impossible to achieve because we are unable to recreate the conditions of the election in which the undervote occurred. Although the test results cannot be used to provide absolute assurance, we believe that these test results, combined with the other reviews that have been conducted by Florida, GAO, and others, have significantly reduced the possibility that the iVotronic DREs were the cause of the undervote. 
At this point, we believe that adequate testing has been performed on the voting machine software to reach this conclusion and do not recommend further testing in this area. Given the complex interaction of people, processes, and technology that must work effectively together to achieve a successful election, we acknowledge the possibility that the large undervote in Florida’s 13th Congressional District race could have been caused by factors such as voters who intentionally undervoted, or voters who did not properly cast their ballots on the iVotronic DRE, potentially because of issues relating to interaction between voters and the ballot. We provided draft copies of this statement to the Secretary of State of Florida and ES&S for their review and comment. We briefed the Sarasota County Supervisor of Elections on the contents of this statement and asked for their comments. The Florida Department of State provided technical comments, which we incorporated. ES&S and the Sarasota County Supervisor of Elections provided no comments. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Task Force may have at this time. For further information about this statement, please contact Naba Barkakati at (202) 512-6412 or barkakatin@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Other key contributors to this statement include James Ashley, Stephen Brown, Francine Delvecchio, Cynthia Grant, Geoffrey Hamilton, Richard Hung, Douglas Manor, John C. Martin, Jan Montgomery, Daniel Novillo, Deborah Ortega, Keith Rhodes, Sidney Schwartz, Patrick Tobo, George Warnock, and Elizabeth Wood. We also appreciate the assistance of the House Recording Studio in the video recording of the tests. 
Each of the three tests—firmware verification, ballot, and calibration—was conducted on a sample of the 1,499 iVotronic DREs that recorded votes during the 2006 general election in Sarasota County, Florida. We selected 115 iVotronic DREs for the firmware test, 10 for the ballot test, and 2 for the calibration test. Appendix II contains the serial numbers of the iVotronic DREs that were tested. We selected a stratified random probability sample of iVotronic DREs from the population of 1,499. The sample was designed to allow us to generalize the results of the firmware test to the population of iVotronic DREs used in this election. We stratified the population into two strata based on whether the machines had been sequestered since the 2006 general election. There were a total of 818 machines that were sequestered and 681 machines that had been used in subsequent elections. The population and sample are described in table 3. We calculated the sample size in each stratum using the hypergeometric distribution to account for the relatively small populations in each stratum. We determined each sample size to be the minimum number of machines necessary to yield an upper bound of 7.5 percent, at the 99 percent confidence level, if we observed zero failures in the firmware test. Assuming that we found no machines using an uncertified firmware version, these sample sizes allowed us to conclude with 99 percent confidence that no more than 7.5 percent of the machines in each stratum were using uncertified firmware. Further, this sample allowed us to conclude that no more than 4 percent of the 1,499 iVotronic DREs were using uncertified firmware, at the 99 percent confidence level. An additional five sequestered machines and five non-sequestered machines were selected as backup machines in case of problems in locating the selected machines or other problems that prevented testing them.
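The sample-size logic described above can be sketched in a few lines of Python. This is a rough reconstruction under stated assumptions: the rounding convention for the 7.5 percent bound is ours, not GAO's, so the computed stratum sizes may differ slightly from the 115 machines actually selected.

```python
def prob_zero_defects(N, K, n):
    """Hypergeometric probability that a sample of n machines, drawn without
    replacement from N machines of which K run uncertified firmware,
    contains zero such machines."""
    p = 1.0
    for i in range(n):
        p *= (N - K - i) / (N - i)
    return p

def min_sample_size(N, bound=0.075, alpha=0.01):
    """Smallest n such that, if more than `bound` of the N machines ran
    uncertified firmware, observing zero failures in the sample would have
    probability below alpha (i.e., a 99 percent confidence statement)."""
    K = int(N * bound) + 1  # smallest defect count exceeding the bound (assumed rounding)
    n = 1
    while prob_zero_defects(N, K, n) > alpha:
        n += 1
    return n

# Stratum sizes from the sample design: 818 sequestered, 681 used since
for N in (818, 681):
    print(N, min_sample_size(N))
```

Because zero failures were actually observed in the firmware test, the confidence bound holds; had any sampled machine failed the check, the 7.5 percent statement would no longer follow.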
We randomly selected a total of 10 machines from the population of 1,384 machines that were not selected in the firmware test sample. This sample size is not sufficient to allow us to make direct generalizations to the population. However, if we are reasonably confident that the same software is used in all 1,499 machines, then we are more confident that the results of the other tests on a small number of machines can be used to obtain increased assurance that the iVotronic DREs did not cause the undervote. We randomly selected one machine from each of the nine ballot styles used during the general election and one machine from the machines used for early voting. In case of problems in operating or locating the machines, we also randomly selected two additional machines for each ballot style and for early voting. The two iVotronic DREs selected for calibration testing were selected from those tested in the ballot test. Because the machines used for the ballot tests included an ADA machine and “standard” machines, we selected one of each for calibration testing. Although we did not test the ADA capabilities of the ADA machine (e.g., the audio ballots), we found that the on-screen appearance of selections on the ADA machine differed slightly from that on non-ADA machines. For example, a standard non-ADA machine displayed a blue bar across the screen and an X in the box next to the candidate’s name when a selection was made, while an ADA machine showed only an X in the box next to the candidate’s name. Table 4 lists the iVotronic DREs that were tested by GAO. For each machine, the table shows whether the machine was sequestered and what type of testing was conducted on the machine.
In November 2006, about 18,000 undervotes were reported in Sarasota County in the race for Florida's 13th Congressional District (Florida-13). After the election results were contested in the House of Representatives, the task force unanimously voted to seek GAO's assistance in determining whether the voting systems contributed to the large undervote in Sarasota County. In October 2007, GAO presented its findings on the review of the voting systems and concluded that while prior tests and reviews provided some assurance that the voting systems performed correctly, they were not enough to provide reasonable assurance that the voting systems in Sarasota County did not contribute to the undervote. GAO proposed that a firmware verification test, a ballot test, and a calibration test be conducted. The task force requested that GAO proceed with the proposed additional tests. GAO also verified whether source code escrowed by Florida could be rebuilt into the firmware used in Sarasota County. To conduct its work, GAO conducted tests on a sample of voting systems used in Sarasota County during the 2006 general election. GAO witnessed the rebuild of the firmware from the escrowed source code at the manufacturer's development facility. GAO reviewed test documentation from Florida, Sarasota County, and the voting system manufacturer and met with election officials to prepare the test protocols and detailed test procedures. GAO conducted three tests on the iVotronic Direct Recording Electronic (DRE) voting systems in Sarasota County and these tests did not identify any problems. Based on its testing, GAO obtained increased assurance that the iVotronic DREs used in Sarasota County during the 2006 general election did not contribute to the large undervote in the Florida-13 contest. 
Although the test results cannot be used to provide absolute assurance, GAO believes that these test results, combined with the other reviews that have been conducted by the State of Florida, GAO, and others, have significantly reduced the possibility that the iVotronic DREs were the cause of the undervote. GAO's firmware verification test showed that the firmware installed in a statistically selected sample of 115 machines used by Sarasota County during the 2006 general election matched the firmware certified by the Florida Division of Elections. The statistical approach used in selecting these machines lets GAO estimate with a 99 percent confidence level that no more than 60 of the 1,499 iVotronic DREs that recorded votes in the 2006 general election were using different firmware. Consequently, GAO is able to place more confidence in the results of other tests conducted on a small number of machines by GAO and by others, which indicated that the iVotronic DREs did not cause the undervote. GAO also confirmed that when the manufacturer rebuilt the iVotronic DRE firmware from the source code that was held in escrow by the Florida Division of Elections and previously reviewed by GAO and others, the resulting firmware matched the version certified by the Florida Division of Elections. For the ballot test, GAO cast predefined test ballots on 10 iVotronic DREs and confirmed that each ballot was displayed and recorded accurately. GAO conducted the calibration test by miscalibrating two iVotronic DREs and casting ballots on them to validate that the machines recorded the information that was displayed on the touch screen. 
Based on the results of the ballot and calibration tests, GAO found that (1) the machines properly displayed, recorded, and counted the selections for all test ballots cast during ballot testing involving 112 common ways a voter may have interacted with the system, and (2) the deliberately miscalibrated machines, though difficult to use, accurately recorded the ballot selections as displayed on screen. At this point, GAO believes that adequate testing has been performed on the voting machine software and does not recommend further testing in this area. Given the complex interaction of people, processes, and technology that must work effectively together to achieve a successful election, GAO acknowledges the possibility that the large undervote in Florida's 13th Congressional District race could have been caused by factors such as voters who intentionally undervoted, or voters who did not properly cast their ballots on the iVotronic DRE, potentially because of issues relating to interaction between voters and the ballot.
Originally, SSNs were used to keep track of earnings, contributions, and old-age, disability, and survivor benefits for people covered by the Social Security program. Increasingly, however, SSNs have been used for a wide variety of purposes by private firms and federal, state, and local governments. Cases in which ineligible foreign-born individuals have obtained or used SSNs to secure employment and receive Social Security benefits, and increasing incidents of identity theft, have focused attention on the need to prevent abuse of SSNs. An estimated 96 percent of workers in the U.S., including many foreign-born citizens and noncitizens, are required to pay Social Security payroll taxes, also called Federal Insurance Contributions Act (FICA) taxes. When workers pay Social Security taxes, they earn up to 4 coverage credits each year. Generally, 40 credits—equal to at least 10 years of work—entitle workers to Social Security benefits when they reach retirement age. Social Security benefits are based on workers’ covered earnings during their careers. Different requirements apply in cases where workers become disabled or die with relatively short work careers. Although the Social Security Act provides that people meeting the work and contribution requirements accrue benefits, the act also generally prohibits payment of benefits to people who are not lawfully present in the U.S. as specified by DHS regulations. In fiscal year 2005, SSA assigned about 1.1 million original SSNs to noncitizens, representing about one-fifth of the 5.4 million original SSNs issued that year. Fewer than 15,000 of these were cards with “nonwork” SSNs issued to noncitizens unauthorized to work in the U.S. under immigration law. SSA also issues replacement cards to people who have already been assigned SSNs but have lost their cards. In fiscal year 2005, SSA issued over 800,000 such replacement cards to noncitizens, about 7 percent of the 12.1 million replacement cards issued that year.
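The credit arithmetic above can be illustrated with a minimal sketch. This is a simplification: in reality the number of credits earned each year depends on the amount of covered earnings, and disability and survivor cases use shorter duration rules.

```python
MAX_CREDITS_PER_YEAR = 4        # up to 4 coverage credits per year
RETIREMENT_CREDITS_REQUIRED = 40  # generally 40 credits for retirement benefits

def credits_earned(years_worked):
    """Credits accrued assuming the worker earns the 4-credit annual maximum."""
    return years_worked * MAX_CREDITS_PER_YEAR

def insured_for_retirement(total_credits):
    return total_credits >= RETIREMENT_CREDITS_REQUIRED

print(insured_for_retirement(credits_earned(10)))  # 10 years -> 40 credits -> True
print(insured_for_retirement(credits_earned(9)))   # 9 years -> 36 credits -> False
```

This is why 40 credits translate to "at least 10 years of work": a worker earning the annual maximum needs a minimum of 10 years to reach the threshold.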
SSA uses different procedures for assigning SSNs depending on whether individuals are born in the U.S. or are foreign-born and depending on their citizenship or immigration status. Almost all individuals born in the U.S. or in U.S. jurisdictions are U.S. citizens at birth; however, some foreign-born individuals can also become U.S. citizens if their adoptive or birth parents are U.S. citizens or if they become naturalized citizens. Because foreign-born citizens are eligible for the same types of SSNs and benefits as U.S.-born citizens, we will focus for the remainder of this testimony on noncitizens. Immigrants are noncitizens who may lawfully reside and work permanently in the U.S. About 600,000 to 1.1 million noncitizens come to the U.S. each year as immigrants. Nonimmigrants include noncitizens who come to the U.S. lawfully on a temporary basis (for example, with temporary visas) and those who reside in the U.S. unlawfully, in violation of the Immigration and Nationality Act. Noncitizens who remain in the U.S. without DHS authorization have either overstayed their nonimmigrant visas or entered the country illegally. As of 2004, an estimated 10.3 million unauthorized noncitizens lived in the U.S., according to the Pew Hispanic Center’s analysis of March 2004 Current Population Survey and other Census data. The State Department, DHS, SSA, and employers have responsibilities to help ensure that noncitizens who are not authorized to work are denied employment. The State Department determines which people abroad seeking to come to the U.S. are eligible to enter the U.S. and which are eligible to work in the U.S. DHS denies entry to people who are ineligible and enforces immigration requirements in cases where people enter the U.S. illegally or work without authorization. In cooperation with the State Department and DHS, SSA assigns SSNs to eligible noncitizens. Employers are required to inspect employees’ work authorization documents.
A regular Social Security card can be one of the key documents that employers use to verify that employees are authorized to work. In some cases, however, it may be difficult to distinguish a regular card from a nonwork card. Specifically, cards for individuals not eligible to work did not contain the restriction “NOT VALID FOR EMPLOYMENT” until May 1982. Finally, SSA ensures that benefit payments go only to people who have earned them and who are lawfully present in the U.S. or in another country with which the U.S. has an agreement for reciprocal cross-border payment of benefits. The Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA) was enacted in response to the terrorist attacks of September 11, 2001, to reform our nation’s intelligence community and strengthen terrorism prevention and prosecution, border security, and international cooperation and coordination. IRTPA included several specific provisions for strengthening the SSN enumeration process and documentation requirements for obtaining SSNs and cards. For example, the act required minimum standards for birth certificates and directed the Department of Health and Human Services to establish these standards in consultation with DHS, SSA, and others. IRTPA also required that SSA limit the number of replacement cards it issues annually; adopt measures to improve verification of documents presented to obtain an original or replacement Social Security card; independently verify any birth record presented to obtain an SSN; prevent the assignment of SSNs to unnamed children and adopt additional measures to prevent assignment of multiple SSNs to the same child; form an interagency task force to establish standards to better protect Social Security cards and SSNs from counterfeiting, tampering, alteration, and theft; and provide for implementation of security requirements by June 2006.
The Social Security Protection Act (SSPA) of 2004 imposed new restrictions on the payment of Social Security benefits to noncitizens. Before these provisions went into effect, all payments into the system counted toward insured status, regardless of whether the noncitizen was authorized to work by DHS. Under the new law, noncitizens who apply for benefits with an SSN originally assigned after 2003 must have had work authorization at the time their SSN was assigned, or at some later point before applying for benefits, to gain insured status under the Social Security program. If the individual never had authorization to work in the United States, none of his or her earnings would count toward insured status, and neither the worker nor dependent family members could receive Social Security benefits. SSA also has specific procedures to award benefits for foreign-born workers who work both in the U.S. and in another country with which the U.S. has a totalization agreement. These are bilateral agreements intended to accomplish three purposes. First, they eliminate the dual social security coverage and taxes that multinational employers and employees encounter when workers temporarily reside in a foreign country with its own Social Security program. Under these agreements, U.S. employers and their workers sent temporarily abroad benefit by paying only U.S. social security taxes, and foreign businesses and their workers benefit by paying Social Security taxes only to their home country. Second, the agreements provide benefit protection to workers who have divided their careers between the U.S. and a foreign country but do not qualify for benefits under one or both Social Security systems, despite paying taxes into both. Totalization agreements allow such workers to combine (totalize) work credits earned in both countries to meet minimum benefit qualification requirements.
Third, totalization agreements generally improve the portability of Social Security benefits by authorizing the waiver of residency requirements. The U.S. has totalization agreements in effect with 21 countries—several western European countries, and others including Canada, Australia, Japan, South Korea, and Chile. (See table 4 in app. I for a list of countries with which the U.S. has totalization agreements.)

In coordination with the State Department and DHS, SSA determines who is eligible for an SSN by verifying certain immigration documents and determining if an individual’s card requires a work restriction. Our 2003 report identified improvements SSA had made in its enumeration processes, but also pointed to continued weaknesses, some of which SSA and the Congress have since addressed. Under current law, U.S. citizens are eligible for SSNs whether they were born in the U.S. or elsewhere. Depending on their immigration status, noncitizens may be eligible for one of three types of Social Security cards: regular cards, cards valid for work only with authorization from DHS, and nonwork SSN cards.

1. Regular Social Security card: The first and most common type of card is for individuals who are eligible to work. Individuals issued these SSNs receive a Social Security card showing their name and SSN without marked restriction. To be eligible for this card, an individual must be one of the following: a U.S. citizen (whether foreign-born or not); a noncitizen lawfully admitted to the U.S. for permanent residence (an immigrant); a noncitizen with permission from DHS to work permanently in the U.S.; or a member of a group eligible to work in the U.S. on a temporary basis (e.g., with a work visa, or as an authorized worker in an approved exchange program).

2. DHS-authorized work card: A much less common type of Social Security card is issued to noncitizens who are eligible to work under limited circumstances.
They receive a card showing the inscription “VALID FOR WORK ONLY WITH DHS AUTHORIZATION.” To be eligible for these cards, noncitizens must have DHS permission to work temporarily in the U.S. SSA issues these cards to eligible workers, such as certain foreign students and spouses and children of exchange visitors.

3. Nonwork card: The third type of card is for people not eligible to work in the U.S. SSA sends recipients of these SSNs a card showing their name, SSN, and the inscription “NOT VALID FOR EMPLOYMENT.” To be issued these cards, noncitizens who are legally in the U.S. and do not have DHS permission to work must have been found eligible to receive a federally funded benefit or be subject to a state or local law that requires them to have an SSN to get public benefits. Examples include Temporary Assistance for Needy Families, Supplemental Security Income, Social Security Survivor benefits, Medicaid, and Food Stamps. As of 2003, SSA had issued a total of slightly more than 7 million nonwork Social Security cards, but in recent years SSA has greatly reduced the number it issues.

In our 2003 report, we found that SSA has over the years improved document verifications and developed new initiatives to prevent the inappropriate assignment of SSNs to noncitizens. For example, SSA requires third-party verification of all noncitizen documents, such as birth certificates, with DHS and the State Department before issuing an SSN. SSA also requires field staff to visually inspect documents before issuing an SSN. However, many field staff we interviewed at that time were relying heavily on DHS’s verification and neglecting SSA’s standard inspection practices, even though both were required.
We found that SSA’s automated system for assigning SSNs was not designed to prevent issuing SSNs if field staff bypass required verification steps. We also found that SSA has undertaken new initiatives to shift the burden of processing noncitizen SSN applications and verifying documents away from its field offices. In late 2002, SSA began phasing in a new process for issuing SSNs to noncitizens, called “Enumeration at Entry” (EAE). Through this initiative, immigrants 18 and older can visit a State Department post abroad to apply for an SSN at the same time they apply for a visa to come to the U.S. The State Department and DHS authenticate the documents and transmit them to SSA, which then issues the SSN. Also, SSA was planning to expand the program over time to include other noncitizen groups, such as students and exchange visitors. In addition, SSA established a specialized center in Brooklyn, New York, which focuses exclusively on enumeration and utilizes the expertise of DHS immigration status verifiers and investigators from SSA’s Office of the Inspector General. More recently, SSA established a similar center in Las Vegas, Nevada. At the time we did our field work for the 2003 report, SSA had not tightened controls in two key areas of its enumeration process that could be exploited by individuals seeking fraudulent SSNs: the assignment of SSNs to children under age 1 and the replacement of Social Security cards. SSA requires third-party verification of the birth records for U.S.-born children age 1 and over, but calls only for a visual inspection of birth documents for children under age 1. In our field work, we found that this remains an area vulnerable to fraud. Working undercover and posing as parents of newborns, our investigators were able to obtain two SSNs using counterfeit documents. Since then the IRTPA was enacted and requires SSA to independently verify any birth documents other than for purposes of enumeration at birth. 
Until the passage of the IRTPA, SSA’s policy allowed individuals to obtain up to 52 replacement cards per year, leaving the program vulnerable to misuse. While SSA requires noncitizens applying for a replacement SSN card to provide the same identity and immigration documents as if they were applying for an original SSN, SSA’s requirements for citizens were much less stringent. Individuals could obtain numerous replacement SSN cards with relatively weak or counterfeit documentation for a wide range of illicit uses, including selling them to noncitizens. Our 2003 report contained six recommendations to SSA. As shown in table 1 below, SSA has implemented all but one, which concerns enhancement of its Modernized Enumeration System to prevent issuance of SSNs without use of required verification procedures. In the interim, however, SSA now requires staff to use a software tool that documents verification procedures. Although SSA has implemented our recommendation concerning an evaluation of the Enumeration at Entry program, the results of the evaluations prompted SSA’s Office of Inspector General and Office of Quality Assurance and Performance Assessment to recommend several additional measures to correct errors during the early implementation of the program. To determine whether noncitizens are eligible for SSA benefits, SSA has implemented new procedures, including some required by the SSPA. The SSPA tightened restrictions on payment of benefits to noncitizens who are not authorized to work. Generally, both citizens and noncitizens in the U.S. accrue credits through paying Social Security payroll taxes. Noncitizens must also have authority to work in the U.S. and be lawfully present in the U.S. at the time they apply for benefits. Under some circumstances, unauthorized workers may receive benefits based on work credits they accrued while working without an immigration status permitting employment in the U.S., with a nonwork SSN, or without a valid SSN during their work years.
If noncitizens later receive a valid SSN and become eligible to work, they can show SSA their wage records and request credit for earnings from prior unauthorized work. If they establish legal immigration status, they may then receive benefit payments based on the earlier periods of unauthorized work. There are some exceptions to the lawful presence requirement, such as for workers covered under the terms of a totalization agreement. However, our work shows that SSA’s processes for entering into totalization agreements have been largely informal and do not mitigate potential risks. The enactment of the SSPA in 2004 tightened the eligibility requirements for paying Social Security benefits to noncitizens. Before SSPA, noncitizens who worked in covered employment could in some circumstances eventually earn SSA benefits without obtaining a work-authorized SSN. If noncitizens had no SSN but were entitled to benefits, SSA would assign a nonwork SSN so their Social Security eligible earnings could be recorded. SSPA provides that, in order to accrue benefits, noncitizens with an SSN issued on or after January 1, 2004, must have authorization to work in the U.S. at the time the SSN is assigned or at some later time. Without work authorization, noncitizens and their dependents or surviving family members cannot receive any benefits. (See table 2 below.) Nonetheless, to receive benefits while in the U.S., noncitizens must be legally present in the U.S. under immigration law, regardless of when they were first assigned an SSN. Previously, if noncitizens accrued Social Security benefits and resided outside the U.S., they could under some circumstances receive those benefits without ever having been legally present in the U.S. Since SSPA requires all noncitizens originally assigned an SSN on or after January 1, 2004, to have a work-authorized SSN to accrue benefits, those living outside the country must also obtain a work-authorized SSN.
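The SSPA accrual rule and the lawful-presence requirement described above can be summarized as a simplified decision sketch. The field and function names here are hypothetical, and the sketch compresses the statute: it omits the exceptions for workers covered by totalization agreements and for citizens of certain countries receiving benefits abroad.

```python
from dataclasses import dataclass
from datetime import date

SSPA_CUTOFF = date(2004, 1, 1)  # SSNs assigned on/after this date fall under SSPA

@dataclass
class Noncitizen:
    ssn_assigned: date
    ever_work_authorized: bool  # at assignment or at any later point
    lawfully_present: bool      # at the time benefits are claimed

def earnings_count_toward_insured_status(p):
    # Under SSPA, an SSN assigned on or after Jan 1, 2004 requires work
    # authorization (then or later) for any earnings to count.
    if p.ssn_assigned >= SSPA_CUTOFF:
        return p.ever_work_authorized
    return True  # pre-2004 SSNs: earnings counted regardless of authorization

def payable_inside_us(p):
    # Lawful presence is required to be paid benefits inside the U.S.,
    # regardless of when the SSN was assigned.
    return earnings_count_toward_insured_status(p) and p.lawfully_present
```

For example, under this reading a worker first assigned an SSN in 2005 who never obtains work authorization accrues nothing, while the same work history under a pre-2004 SSN would still count toward insured status.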
Obtaining a work-authorized SSN requires both lawful presence in the U.S. and an immigration status permitting work in the U.S. However, a noncitizen may receive benefits outside the U.S. if he or she is a citizen of a country that has a social insurance or pension system that pays benefits to eligible U.S. citizens residing outside that country, or is a worker covered under a totalization agreement. A noncitizen not meeting any of these exceptions will have his or her benefits suspended beginning with the seventh month of absence. In general, totalization agreements between the U.S. and other countries provide mutually beneficial business, tax, and other incentives to employers and employees, but the agreements also expose both countries to financial costs and risk. Our recent reports on totalization agreements identified two fundamental vulnerabilities in SSA’s existing procedures for entering into totalization agreements. First, our analysis demonstrated that the agency’s actuarial estimates of the number of foreign citizens who would be affected by an agreement (and thus entitled to U.S. Social Security benefits) have overstated or understated the number, usually by more than 25 percent. As a result, depending on the size of the foreign population covered by an agreement, the actual cost to the Social Security trust fund from a given agreement could be greater or smaller than predicted. In response to our recommendation to improve its process for projecting costs to the trust fund from totalization agreements, SSA responded that it cannot eliminate all variations between projected costs and subsequent actual experience. Second, our work has shown that SSA’s processes for entering into these agreements have been largely informal and have not included specific steps to assess and mitigate potential risks to the U.S. Social Security system.
For example, we found that SSA’s procedures for verifying critical information such as foreign citizens’ earnings, birth, and death data were insufficient to ensure the integrity of such information. Inaccurate or incomplete information could lead to improper payments from the Social Security trust fund. In response to our recommendations, SSA developed several new initiatives to identify risks associated with totalization agreements. For example, SSA developed a standardized questionnaire to help the agency identify and assess the reliability of earnings data in countries that may be considered for future totalization agreements. In addition, SSA is conducting numerous “vulnerability assessments” to detect potential problems with the accuracy of foreign countries’ documents. SSA is also exploring a more systematic approach for independently verifying foreign countries’ data, such as the use of computer matches. (For a summary of the status of recommendations, see table 3 below.) Laws and policies are in place to ensure that SSA treats noncitizens fairly in the issuance of SSNs, the provision of benefits, and in cases where they are covered under the terms of totalization agreements. Recent legislation and revisions to SSA policies represent some progress in these areas. While SSA is making progress in improving the program’s integrity by strengthening its procedures for verifying documents and coordinating with other agencies and foreign governments, opportunities remain for additional progress. SSA plans further enhancements to the Enumeration at Entry program in order to protect against errors, fraud and abuse. In addition, a more systematic approach to verifying data from other countries with which we have totalization agreements can help ensure proper payments of benefits and prompt notice of the death of beneficiaries. SSA will, however, continue to face challenges in its dealings with noncitizens. 
Changes in immigration laws and shortcomings in the enforcement of those laws make it difficult for SSA to identify noncitizens who are eligible for SSNs and for benefit payments. Continued attention to these issues by both SSA and the Congress is essential to ensure that noncitizens receive benefits to which they are entitled and the integrity of the Social Security program is protected. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I’d be happy to answer any questions you may have. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues at (202) 512-7215. Blake L. Ainsworth, Assistant Director; Alicia Puente Cackley, Assistant Director; Benjamin P. Pfeiffer; Anna H. Bonelli; Jeremy D. Cox; Jacqueline Harpp; Nyree M. Ryder; Daniel A. Schwimer; and Paul C. Wright also contributed to this report.
In 2004, an estimated 35.7 million foreign-born people resided in the United States, and many legitimately hold Social Security numbers (SSNs), which can have a key role in verifying authorization to work in the United States. However, some foreign-born individuals have been given SSNs inappropriately. Recent legislation, aimed at protecting the SSN and preventing fraud and abuse, changes how the Social Security Administration (SSA) assigns numbers and awards benefits for foreign-born individuals. The chairman of the Subcommittee on Social Security asked GAO to address two questions. First, how does SSA determine who is and is not eligible for an SSN? Second, how does SSA determine who is and is not eligible for Social Security benefits? SSA determines who is eligible for an SSN by verifying certain immigration documents and determining if an individual's card requires a work restriction. Some foreign-born individuals are eligible for one of three kinds of Social Security cards depending in part on their immigration status: (1) regular cards, (2) those valid for work only with authorization from the Department of Homeland Security (DHS), and (3) those that are not valid for work--non-work cards. As of 2003, SSA had issued slightly more than 7 million non-work cards to people who need them to receive benefits to which they were otherwise entitled. Both SSA's Inspector General and GAO have identified weaknesses in SSA procedures for assigning SSNs and issuing cards, also known as enumeration. For example, working undercover and posing as parents of newborns, GAO investigators were able to obtain Social Security cards by using counterfeit documents. Congress has enacted recent legislation strengthening the SSN enumeration process and documentation requirements. 
SSA is implementing the law and is improving document verification and now requires third-party verification of noncitizen documents such as birth certificates and visual inspection of documents before issuing an SSN. SSA also continues to strengthen program integrity by, for example, restricting the number of replacement cards. Congress and SSA have also improved laws and procedures designed to strengthen program integrity in the payment of benefits to the foreign-born. Due to provisions of the Social Security Protection Act of 2004, some foreign-born individuals who were not authorized to work will no longer be eligible for benefits. To be entitled to benefits, the law requires noncitizens originally assigned an SSN after 2003 to have a work-authorized SSN. Amendments to the Social Security Act in 1996 require individuals to be lawfully present in the U.S. to receive Social Security benefits, though some noncitizens can receive benefits while living abroad, such as noncitizens who have worked in the U.S. and in a country with which the U.S. has a totalization agreement. SSA's totalization agreements coordinate taxation and public pension benefits. The agreements help eliminate dual taxation and Social Security coverage that multinational employers and employees encounter when workers temporarily reside in a foreign country with its own Social Security program. Successful implementation of these agreements requires the countries involved to carefully coordinate and verify data they exchange. Computer matches with foreign countries, for example, may help protect totalization programs from making payments to ineligible individuals. SSA is exploring options for undertaking such exchanges.
Managed by U.S. Customs and Border Protection (CBP), SBInet is to strengthen DHS’s ability to detect, identify, classify, track, and respond to illegal breaches at and between ports of entry. It includes the acquisition, development, integration, deployment, and operations and maintenance of a mix of surveillance technologies, such as cameras, radars, sensors, and C3I technologies. Surveillance technologies include unattended ground sensors that can detect heat and vibrations associated with foot traffic and metal associated with vehicles, radar mounted on fixed towers that can detect movement, and cameras mounted on fixed towers that can be used to identify and classify items of interest detected and tracked by ground sensors and radar. These technologies are generally commercial off-the-shelf products. C3I technologies include customized software development to produce a common operating picture (COP)—a uniform presentation of activities within specific areas along the border—as well as the use of CBP network capabilities. Together, the surveillance technologies are to gather information along the border and transmit this information to COP terminals located in command centers, which are to assemble it to provide CBP agents with border situational awareness, to, among other things, enhance agents’ tactical decisionmaking regarding potential apprehensions. Since fiscal year 2006, DHS has received about $4.4 billion in appropriations for SBI, including about $2.5 billion for physical fencing and related infrastructure, about $1.5 billion for virtual fencing (e.g., surveillance systems) and related infrastructure (e.g., towers), and about $300 million for program management. As of May 2010, DHS had obligated about $1.02 billion for SBInet. The SBI Program Management Office, which is organizationally within CBP, is responsible for managing key acquisition functions associated with SBInet, including prime contractor management and oversight. 
It is organized into four components: the SBInet System Program Office (SPO), Operational Integration Division, Business Operations Division, and Systems Engineering Division. The SPO is responsible for such key contractor oversight activities as verifying and accepting contract deliverables and conducting contractor management and technical reviews. In addition, the Business Operations Division has primary responsibility for monitoring the prime contractor’s cost and schedule performance. As of May 2010, the SBI Program Management Office was staffed with 194 people—105 government employees, 78 contractor staff, and 11 detailees. (See figure 1 for a partial SBI program organization chart.) In addition, CBP has engaged the Defense Contract Management Agency to, among other things, perform surveillance of the prime contractor’s EVM, systems engineering, hardware and software quality assurance, and risk management. Further, the SBI Contracting Division, which is organizationally within the CBP Office of Administration Procurement Directorate’s Enterprise Contracting Office, is responsible for performing contract administration activities, such as maintaining the contract file and notifying the contractor in writing of whether deliverables are accepted or rejected. The SBI Contracting Division is organized into three branches: SBI Contracting, SBInet System Contracting, and SBInet Deployment Contracting. As of May 2010, the SBI Contracting Division was staffed with 22 people—18 government employees and 4 contractor staff. (See figure 2 for a partial SBI Contracting Division organization chart.) In September 2006, CBP awarded an Indefinite Delivery/Indefinite Quantity (IDIQ) prime contract to the Boeing Company. The contract was for 3 base years with three additional 1-year options for designing, producing, testing, deploying, and sustaining SBI. In September 2009, CBP exercised the first option year. 
Under the prime contract, CBP has issued 11 task orders that relate to SBInet, covering, for example, COP design and development, system deployment, and system maintenance and logistics support. As of May 2010, 5 of the 11 task orders were complete and 6 were ongoing. (See table 1 for a summary of the SBInet task orders.) Through the task orders, CBP’s strategy is to deliver SBInet capabilities incrementally. To accomplish this, the program office adopted an evolutionary system life cycle management approach in which system capabilities are to be delivered to designated locations in a series of discrete subsets of system functional and performance capabilities that are referred to as blocks. The first block (Block 1) includes the purchase of commercially available surveillance systems, development of customized COP systems and software, and use of existing CBP communications and network capabilities. According to program officials, as of July 2010, the Block 1 design was deployed to TUS-1 and was being deployed to AJO-1, both of which are located in CBP’s Tucson Sector of the southwest border. Together, these two deployments cover 53 miles of the 1,989-mile-long southern border. Also, according to program officials, as of July 2010, TUS-1 and AJO-1 were to be accepted in September 2010 and December 2010, respectively. In January 2010, the Secretary of Homeland Security ordered an assessment of the SBI program. In addition, on March 16, 2010, the Secretary froze fiscal year 2010 funding for any work on SBInet beyond TUS-1 and AJO-1 until the assessment is completed. Also at that time, the Secretary redirected $35 million that had been allocated to SBInet Block 1 deployment to other tested and commercially available technologies, such as truck-mounted cameras and radar, called Mobile Surveillance Systems, to meet near-term needs. 
According to the SBI Executive Director, the department’s assessment would be comprehensive and science-based, intended to determine if there are alternatives to SBInet that may more efficiently, effectively, and economically meet U.S. border security needs. Further, the Executive Director stated that if the assessment suggests that the SBInet capabilities are worth the cost, DHS will extend its deployment to sites beyond TUS-1 and AJO-1. However, if the assessment suggests that alternative technology options represent the best balance of capability and cost-effectiveness, DHS will redirect resources to these other options. According to program officials, the initial phase of the assessment, which addresses the Arizona border, was completed in July 2010, and the results are currently being reviewed by senior DHS management. Officials were unable to provide a date for completion of the review. Our research and evaluations of information technology programs have shown that delivering promised system capabilities and benefits on time and within budget largely depends on the extent to which key acquisition management disciplines are employed. These disciplines include, among others, requirements management, risk management, and test management. As is discussed in the following section, we have previously reported on the extent to which these disciplines have been employed on SBInet. Contractor management and oversight, which is the focus of this report, is another acquisition management discipline. Among other things, this discipline helps to ensure that the contractor performs the requirements of the contract and the government receives the service or product intended. DHS acquisition policies and guidance, along with other relevant guidance, recognize the importance of these management and oversight disciplines. 
As we have previously reported, not employing them increases the risk that system acquisitions will not perform as intended and will require expensive and time-consuming rework. Since 2007, we have identified a range of management weaknesses and risks facing SBInet, and we have made a number of recommendations that DHS has largely agreed with and, to varying degrees, taken actions to address. For example, in February 2007, we reported that DHS had not fully defined activities, milestones, and costs for implementing the program or demonstrated how program activities would further the strategic goals and objectives of SBI. Further, we reported that the program’s schedule contained a high level of concurrency among related tasks and activities, which introduced considerable risk. Accordingly, we recommended that DHS define explicit and measurable commitments relative to, among other things, program capabilities, schedules, and costs, and re-examine the level of concurrency in the schedule and adjust the acquisition strategy appropriately. In September 2008, we reported that a number of program management weaknesses put SBInet at risk of not performing as intended and taking longer and costing more to deliver than was necessary. Specifically, we reported the following: Important aspects of SBInet were ambiguous and in a continued state of flux, making it unclear and uncertain what technology capabilities were to be delivered when. For example, the scope and timing of planned SBInet deployments and capabilities had continued to change since the program began and remained unclear. Further, the SPO did not have an approved integrated master schedule to guide the execution of the program, and our assimilation of available information indicated that key milestones continued to slip. 
This schedule-related risk was exacerbated by the continuous change in and the absence of a clear definition of the approach used to define, develop, acquire, test, and deploy SBInet. Accordingly, we concluded that the absence of clarity and stability in these key aspects of SBInet impaired the ability of Congress to oversee the program and hold DHS accountable for results, and it hampered DHS’s ability to measure the program’s progress. SBInet requirements had not been effectively defined and managed. While the SPO had issued guidance that defined key practices associated with effectively developing and managing requirements, the guidance was developed after several key activities had been completed. In the absence of this guidance, the SPO had not effectively performed key requirements development and management practices, such as ensuring that different levels of requirements were properly aligned. As a result, we concluded that the risk of SBInet not meeting mission needs and performing as intended was increased, as were the chances of the system needing expensive and time-consuming rework. SBInet testing was not being effectively managed. For example, the program office had not tested the individual system components to be deployed to initial locations, even though the contractor had initiated integration testing of these components. Further, its test management strategy did not contain, among other things, a clear definition of testing roles and responsibilities or sufficient detail to effectively guide planning for specific test events. To address these issues, we recommended that DHS assess and disclose the risks associated with its planned SBInet development, testing, and deployment activities and that it address the system deployment, requirements management, and testing weaknesses that we had identified. DHS largely agreed to implement our recommendations. In September 2009, we reported that SBInet had continued to experience delays. 
For example, deployment to the entire southwest border had slipped from early 2009 to 2016, and final acceptance of TUS-1 and AJO-1 had slipped from November 2009 and March 2010 to December 2009 and June 2010, respectively. We did not make additional recommendations at that time. In January 2010, we reported that SBInet testing was not being effectively managed. Specifically, we reported the following: Test plans, cases, and procedures for the most recent test events were not defined in accordance with important elements of relevant guidance. Further, a large percentage of the test cases for these events were changed extemporaneously during execution. While some of the changes were minor, others were more significant, such as rewriting entire procedures and changing the mapping of requirements to test cases. Moreover, these changes to procedures were not made in accordance with documented quality assurance processes, but rather were based on an undocumented understanding that program officials said they established with the contractor. Compounding the number and significance of changes were questions raised by the SPO and a support contractor about the appropriateness of some changes, noting that some changes made to system qualification test cases and procedures appeared to be designed to pass the test instead of being designed to qualify the system. About 1,300 SBInet defects had been found from March 2008 through July 2009, with the number of new defects identified during this time generally increasing faster than the number being fixed—a trend that is not indicative of a system that is maturing and ready for deployment. While the full magnitude of these unresolved defects was unclear because the majority were not assigned a priority for resolution, some of the defects that had been found were significant. 
Although DHS reported that these defects had been resolved, they had nevertheless caused program delays, and related problems had surfaced that continued to impact the program’s schedule. In light of these weaknesses, we recommended that DHS (1) revise the program’s overall test plan; (2) ensure that test schedules, plans, cases, and procedures are adequately reviewed and approved consistent with the revised test plan; (3) ensure that sufficient time is provided for reviewing and approving test documents prior to beginning a given test event; and (4) triage the full inventory of unresolved system problems, including identified user concerns, and periodically report on their status to CBP and DHS leadership. DHS fully agreed with the last three recommendations and partially agreed with the first. In May 2010, we raised further concerns about the program and called for DHS to reconsider its planned investment in SBInet. Specifically, we reported the following: DHS had defined the scope of the first incremental block of SBInet capabilities that it intended to deploy and make operational; however, these capabilities and the number of geographic locations to which they were to be deployed continued to shrink. For example, the geographic “footprint” of the initially deployed capability has been reduced from three border sectors spanning about 655 miles to two sectors spanning about 387 miles. DHS had not developed a reliable integrated master schedule for delivering the first block of SBInet. Specifically, the schedule did not sufficiently comply with seven of nine key practices that relevant guidance states are important to having a reliable schedule. For example, the schedule did not adequately capture all necessary activities, assign resources to them, and reflect schedule risks. As a result, it was unclear when the first block was to be completed, and continued delays were likely. DHS had not demonstrated the cost-effectiveness of Block 1. 
In particular, it had not reliably estimated the costs of this block over its entire life cycle. Further, DHS had yet to identify expected quantifiable and qualitative benefits from this block and analyze them relative to costs. Without a meaningful understanding of SBInet costs and benefits, DHS lacks an adequate basis for knowing whether the initial system solution is cost effective. DHS had not acquired the initial SBInet block in accordance with key life cycle management processes. While processes associated with, among other things, requirements development and management and risk management were adequately defined, they were not adequately implemented. For example, key risks had not been captured in the risk management repository and thus had not been proactively mitigated. As a result, DHS is at increased risk of delivering a system that does not perform as intended. We concluded that it remains unclear whether the department’s pursuit of SBInet is a cost-effective course of action, and if it is, that it will produce expected results on time and within budget. Accordingly, we recommended that DHS (1) limit near-term investment in the first incremental block of SBInet, (2) economically justify any longer-term investment in SBInet, and (3) improve key program management disciplines. DHS largely agreed with our recommendations, and noted ongoing and planned actions to address each of them. In June 2010, we reported several technical, cost, and schedule challenges facing SBInet, such as problems with radar functionality, and noted that the program office was staffed substantially below planned levels for government positions. We did not make any additional recommendations at that time. Federal regulations and relevant guidance recognize effective contractor management and oversight as a key acquisition management discipline. 
In addition, they describe a number of practices associated with it, including training persons who are responsible for contractor management and oversight, verifying and deciding whether to accept contract deliverables, conducting technical reviews with the contractor to ensure that products and services will satisfy user needs, and holding management reviews with the contractor to monitor contractor performance. To DHS’s credit, it has largely defined key practices aimed at employing each of these contractor management and oversight controls. Moreover, it has implemented some of them, such as training key contractor management and oversight officials and holding management reviews with the prime contractor. However, it has not effectively implemented others, such as documenting contract deliverable reviews and using entry and exit criteria when conducting technical reviews. Reasons for these weaknesses include limitations in the defined deliverable verification and acceptance process, an SPO decision to exclude some deliverables from the process, and insufficient time to review technical review documentation. Without employing the full range of practices needed to effectively manage and oversee the prime contractor, DHS is limited in its ability to know whether the contractor is meeting performance and product expectations. Moreover, it increases the chances that SBInet will not function as intended and will take more time and resources than necessary to deliver, which we have previously reported is the case for Block 1. Training supports the successful performance of relevant activities and tasks by helping to ensure that the responsible people have the necessary skills and expertise to perform the tasks. According to relevant guidance, organizations should define training expectations and should ensure that these expectations are met for individuals responsible for contractor management and oversight. 
DHS’s acquisition management directives define training requirements for, among others, program managers, contracting officers, and contracting officer’s technical representatives (COTR). Specifically: Program managers must be certified at a level commensurate with their acquisition responsibilities. For a Level 1 information technology program, like SBInet, the designated program manager must be certified at a program management Level 3. Contracting officers must be certified at the appropriate level to support their respective warrant authority. COTRs must be trained and certified within 60 days of appointment to a contract or task order. The minimum mandatory requirements are the completion of 40 hours of COTR training and a 1-hour procurement ethics training course. For SBInet, CBP has ensured that people in each of these key positions have been trained in accordance with DHS requirements. Specifically, the two program managers associated with the development and deployment of SBInet Block 1—the SBI Executive Director and the SPO Executive Director—were issued training certifications that qualify each as an Acquisition Professional. Further, each contracting officer responsible for executing actions on the Arizona Deployment Task Order (ADTO), the Design Task Order (DTO), the C3I/COP task order, and the System Task Order (STO) between June 2008 and February 2010 was certified commensurate with his or her respective warrant authority. In addition, for the same time period, each COTR assigned to each of the four task orders received DHS-issued certificates indicating that each had met the minimum training requirements before being assigned to a task order. According to CBP officials, DHS leadership has made contractor management and oversight training a high priority, which helped to ensure that key officials were trained. By doing so, CBP has established one of the key controls to help ensure that prime contractor products and services meet expectations. 
Effectively implementing a well-defined process for verifying and accepting prime contractor deliverables is vital to SBInet’s success. DHS has a defined process for verifying and accepting contract deliverables, but this process does not ensure that deliverable reviews are sufficiently documented. Further, while the SPO has followed key aspects of this process, it has not effectively documented its review of certain deliverables and has not effectively communicated to the prime contractor the basis for rejecting all deliverables. Reasons for not doing so include limitations in the defined verification and acceptance process and a SPO decision to exclude some deliverables from the process. Without documenting all its reviews and effectively communicating the review results to the contractor, the SPO has increased the chances of accepting deliverables that do not meet requirements and having the contractor repeat work to correct deliverable problems. The purpose of contractor deliverable verification and acceptance is to ensure that contractor-provided products and services meet specified requirements and otherwise satisfy the terms of the contract. According to relevant guidance, organizations should have written policies and procedures for verifying and accepting deliverables that, among other things, (1) assign roles and responsibilities for performing verification and acceptance tasks and (2) provide for conducting and documenting deliverable reviews and for effectively communicating to the contractor deliverable acceptance and rejection decisions. To its credit, CBP has defined policies and procedures for verifying and accepting SBInet deliverables. Specifically, it issued its Deliverable Review and Approval Process in July 2007, which specifies how the SPO is to receive, review, and respond to all contract deliverables. Among other things, this guide assigns verification and acceptance roles and responsibilities to key program management positions. 
For example, it assigns the project manager responsibility for overseeing the review process and determining the acceptability of the deliverables, assigns reviewers responsibility for examining the deliverable, and assigns the contracting officer responsibility for communicating the decision to accept or reject the deliverable to the contractor. In addition, it provides for conducting deliverable reviews, which, according to program officials, involves comparing the deliverable to the requirements enumerated in the applicable task order statement of work (typically within the Data Item Description). The process further specifies that the decision to accept or reject the deliverable is to be communicated in writing to the contractor. However, the process does not state that the results of all reviews are to be documented. Instead, it states that a deliverable review comment form is to be prepared only when deficiencies or problems exist. If the deliverable is acceptable, the form does not need to be prepared. Program officials could not explain why review documentation was not required for acceptable deliverables. As a result, and as discussed in the next section of this report, the SPO cannot demonstrate its basis for accepting a number of deliverables, which in turn has increased the risk of, and actually resulted in, deliverables being accepted that do not meet requirements. The SPO followed its process for verifying and accepting SBInet-related deliverables about 62 percent of the time, based on the 29 deliverables that we reviewed. Specifically, the process was fully followed for 18 of the deliverables: (1) 6 that were accepted without documented review comments, (2) 5 that were accepted with documented review comments, and (3) 7 that were rejected with documented review comments. In addition, the acceptance or rejection of all 18 deliverables was communicated in writing to the contractor. 
For example, the ADTO Security Plan Addendum was delivered to the SPO in August 2008. The SPO reviewed the plan and documented its review comments, which included a determination that the plan did not address all required items specified in the task order’s Data Item Description. The CBP contracting officer subsequently notified the contractor in writing that the plan was rejected for this and other reasons. In February 2009, the contractor resubmitted the plan, the SPO reviewed the plan and documented its comments, and the contracting officer again notified the contractor in writing that the plan was rejected. The contractor resubmitted the plan in May 2009, and program officials documented their review of the deliverable. The contracting officer subsequently communicated the deliverable’s acceptance in writing to the contractor. (Figure 3 summarizes the number of contract deliverables that did and did not follow the defined process.) The remaining 11 deliverables, however, did not fully adhere to the defined process. Of these, five were accepted without any documented review comments and without communicating the acceptance in writing to the contractor. The following are examples of these five and reasons for not adhering to the process. Three of the five deliverables related to the C3I/COP task order did not require government approval. However, the Deliverable Review and Approval Process document does not provide for such treatment of these deliverables, and thus this practice is not in accordance with the process. Program officials told us that they have since recognized that this was a poor practice, and they have modified the task order to now require approval of all C3I/COP task order deliverables. One of the five deliverables relates to the STO and is for the C2 Component Qualification Test package, which includes, among other things, the test plan and test cases and procedures for conducting the C2 qualification test event. 
In this case, the SPO accepted the test plan because, according to program officials, several days had passed since the deliverable was received, and they had not received any comments from the reviewers. They said that they therefore accepted the deliverable on the basis of not receiving any review comments to the contrary, but did not notify the contractor in writing of the acceptance. The 11 deliverables also include 3 that were rejected without documented review comments and without the rejection being communicated to the contractor in writing. The following are examples of these three and the reasons for not adhering to the process. One of the three deliverables relates to the C3I/COP task order and is for the Network Operations Center/Security Operations Center (NOC/SOC) Test Plan/Procedures/Description. According to program officials, the contractor did not submit the plan on time, thus requiring them to review it during the readiness review. Based on this review, the plan was rejected, which was communicated verbally to the contractor during the review. Despite rejecting the plan, the program office began testing the NOC/SOC component on the day of the review, without a revised plan being submitted, reviewed, and approved. According to program officials, this occurred in part because of insufficient time and resources to review contractor test-related deliverables. Another one of the three deliverables also relates to the C3I/COP task order, and is for the Release 0.5 Software Test Plan/Procedures/Description. According to program officials, the contractor submitted the plan late. The program office rejected the plan and provided oral comments during a teleconference prior to the review. Nevertheless, the test event again occurred without a revised plan being submitted, reviewed, and accepted. According to program officials, this was also due to insufficient time and resources to review the test plan. 
In this case, the plan was approved in late April 2009, which was 5 months after the test event was conducted.

The 11 deliverables also include 3 for which a decision to accept or reject the deliverable was not made. See the following examples:

One of the three relates to the C3I/COP task order, and is for the NOC/SOC Interface Control Document. For this deliverable, review comments were not documented and no written communication with the contractor occurred. The deliverable was subsequently submitted three times and ultimately accepted. However, program officials could not explain whether the initial deliverable was accepted or rejected, or why the deliverable was submitted multiple times.

Another one of these relates to the STO, and is for the Dynamic Object-Oriented Requirements System. For this deliverable, review comments were not documented, but CBP communicated in writing to the contractor that it was withholding comment on this submission of the deliverable and was to provide a consolidated set of comments on the subsequent submission. Subsequently, the contractor resubmitted the deliverable, and because it was accepted, the review was not documented. The contracting officer communicated the deliverable’s acceptance in writing to the contractor.

By not effectively verifying and accepting contractor deliverables, the SPO cannot ensure that the deliverables will satisfy stated requirements, thus increasing the risk of costly and time-consuming rework. For example, we recently reported that contractor-delivered test plans were poorly defined and resulted in problems during testing. In particular, the NOC/SOC test plans incorrectly mapped requirements to test cases, did not provide for testing all requirements, and required significant extemporaneous changes to test cases during the test events. As a result of the testing problems, the SPO had to conduct multiple test events.
Technical reviews are performed throughout the project life cycle to confirm that products and services being produced by the contractor provide the desired capability and ultimately satisfy user needs. To its credit, DHS has defined a process for conducting technical reviews, but it has not effectively implemented it. In particular, the SPO did not ensure that all key documentation was reviewed and relevant criteria were satisfied before concluding key technical reviews. Program officials attributed these limitations to the program’s aggressive schedule, which resulted in insufficient time to review relevant documentation. Concluding technical reviews without adequate justification has resulted in schedule delays and costly rework. According to relevant guidance, organizations should have written policies and procedures for conducting technical reviews that, among other things, (1) assign roles and responsibilities for performing the specific technical review tasks and (2) establish entry and exit criteria to determine the readiness of the technical solution to proceed to the technical review and to demonstrate and confirm completion of required accomplishments. To its credit, DHS has policies and guidance for conducting technical reviews. Specifically, DHS’s Systems Engineering Life Cycle outlines the key reviews to be performed as well as how these reviews are aligned with the department’s governance process. In addition, DHS guidance defines expectations for technical review exit criteria, stating that compliance with exit criteria is based upon the satisfaction of the content of the criteria, and not upon only the delivery of specified documents. To augment DHS policy and guidance, the SBInet Systems Engineering Plan (SEP), dated November 2008, identifies and describes the technical reviews to be conducted. 
They include, for example:

Requirements Review, which is to ensure that requirements have been completely and properly identified and are understood by the SPO and the contractor. Documentation associated with this review is to include, among other things, a requirements traceability matrix (i.e., a tool for demonstrating that component-level requirements are traceable to higher-level system-level requirements).

Critical Design Review (CDR), which is to (1) demonstrate that the designs are complete and baselined and (2) ensure that the solution is ready for fabrication, coding, assembly, and integration. Documentation for this review is to include, among other things, (1) baselined requirements, (2) interface descriptions, and (3) identified risks and mitigation plans.

Test Readiness Review, which is to assess the readiness of the system solution to begin formal testing. Documentation for this review is to include, among other things, (1) test plans that include test cases and procedures and (2) a traceability matrix that maps each requirement to be tested to a corresponding test case.

In addition, the SBInet SEP describes high-level roles and responsibilities for performing these reviews, and establishes entry and exit criteria for each. For example, it states that the SPO program manager and the chief engineer are responsible for leading the reviews. Further, the SEP defines entry and exit criteria for the CDR. For example:

Entry. System-level requirements should be traceable to component-level requirements; system internal and external interfaces should be defined.

Exit. Design baseline should be established and balanced across cost, schedule, performance, and risk considerations over the investment’s lifecycle; system risks should be identified and mitigation plans should be in place.

The SPO did not follow the defined process for conducting technical reviews. Instead, program officials told us that they used the requirements defined in the respective task orders to guide each review. However, the task orders do not define entry and exit criteria. Rather, they list a set of documents that the contractor is to provide and the SPO is to review. For example, for the Block 1 CDR, the relevant task order requires that the contractor deliver, among other documents, (1) baselined component and system requirements, (2) interface descriptions (i.e., descriptions of the data to be exchanged and the protocols used to exchange the data), and (3) all identified risks and mitigation plans for those risks. However, the task orders do not associate these documents with either entry or exit criteria, and they do not specify characteristics or qualities that the documents are to satisfy.

Without explicit entry and exit criteria, the basis for beginning and ending the technical reviews is unclear, thus increasing the risk that a program will be allowed to proceed and begin the next phase of development before it is ready to do so. In fact, this risk was realized for SBInet. Technical reviews were concluded without adequate justification, which ultimately resulted in problems that required additional time and resources to fix. For example:

NOC/SOC Requirements Review. At this review, the contractor did not deliver a requirements traceability matrix, as required by the relevant task order, until almost a month after the review was completed. Nonetheless, program officials stated that they concluded the review in June 2008, without knowing whether the applicable higher-level system requirements were fully satisfied.

Block 1 CDR. For this review, the contractor delivered (1) the baselined component and system requirements, (2) the interface descriptions, and (3) all identified risks and mitigation plans for those risks. However, these deliverables did not demonstrate that all component-level requirements were baselined and interface descriptions were understood. As we previously reported, baselined requirements associated with the NOC/SOC were not adequately defined at the time of the CDR, as evidenced by the fact that they were significantly changed 2 months later. Program officials stated that while they knew that requirements were not adequately baselined at this review, they believed that the interface requirements were understood well enough to begin system development. However, this was also not the case. Specifically, 39 of 90 NOC/SOC interface requirements were removed from the baseline, and 2 new interface requirements were added after CDR.

Further, all relevant risks were not identified, and not all identified risks had mitigation plans. Specifically, 7 of 31 identified risks did not have mitigation plans, including risks associated with poorly established requirements traceability and inadequately defined requirements for integration suppliers. Moreover, the risks identified were as of May 2008, prior to the beginning of CDR, and did not include four risks identified between June and October 2008, when CDR was concluded. For example, a risk associated with the instability of the C3I/COP software was not addressed during CDR.

Without properly baselined requirements (including interfaces) and proactive mitigation of known risks, system performance shortfalls are likely. To illustrate, we previously reported that ambiguities in requirements actually forced testers to rewrite test steps during execution based on interpretations of what they thought the requirements meant, and they required the SPO to incur the time and expense of conducting multiple events to test NOC/SOC requirements.
NOC/SOC Component Qualification Test Readiness Review. For this review, the SPO did not ensure that a well-defined test plan was in place, to include, among other things, test cases and procedures and a traceability matrix that maps each requirement to be tested to a corresponding test case. Specifically, the contractor delivered the test plan on the day of the review, rather than 10 days prior to the review, as required by the relevant task order. Nevertheless, the SPO concluded the review based on its review of the plan during the test readiness review. In this regard, we previously reported problems with the NOC/SOC test plan, noting that the plan mapped 28 out of 100 requirements to incorrect test cases. Program officials attributed the test plan limitations to, among other things, insufficient time and resources to review the deliverables.

The SBInet independent verification and validation (IV&V) contractor also identified weaknesses within technical reviews. Specifically, the IV&V contractor reported that the SPO was not provided with documentation, including the test plan, early enough for the NOC/SOC test readiness review to allow sufficient time for review. Moreover, in December 2009, the program identified technical oversight of technical review milestones as a major risk to the cost, schedule, and performance goals of the program.

According to program officials, they are developing a technical review manual that is to supplement the SEP and provide detailed guidance for conducting technical reviews. In commenting on a draft of this report, DHS stated that it plans to complete and implement its technical review guide by December 2010.

Management reviews help to ensure that the contractor’s interpretation and implementation of the requirements are consistent with those of the program office.
According to relevant guidance, organizations should have written policies and procedures for conducting management reviews that, among other things, (1) involve relevant stakeholders; (2) assign roles and responsibilities for performing management review tasks; (3) communicate project status information, including cost and schedule information, and risks; and (4) identify, document, and track action items to closure. CBP policy also recognizes the importance of these reviews by requiring the conduct of management reviews (CBP, System Life Cycle Handbook, Version 1.2 (Sept. 30, 2008)).

The program’s Management Plan identifies the types of management reviews that are to be conducted with the contractor. For SBInet, the primary management review is known as the Joint Program Management Review (JPMR). The plan also identifies, for example, the stakeholders that are to participate in the reviews, including the program manager, project managers, program control staff, and the risk management team; and it specifies the topics that are to be discussed at the reviews, such as project status, cost and schedule performance, and risks.

For example, one action item called for a review of the program’s risk management process and tool, including reviewing lessons learned from other programs. The results of the review were presented during a February 2010 briefing, and the action item was closed.

Effectively conducting management reviews has provided program leadership with an understanding of the contractor’s progress and the program’s exposure to risks so that appropriate corrective actions can be taken and the chances of delivering a system solution that meets mission needs within budget are enhanced. However, as discussed in the next section, the EVM performance data presented at these management reviews were not reliable, thus rendering those reviews, at best, limited in the extent to which they disclosed the true status of the program.
Measuring and reporting progress against cost and schedule expectations (i.e., baselines) is a vital element of effective contractor management and oversight. As noted earlier, EVM provides a proven means for measuring progress against cost and schedule commitments and thereby identifying potential cost overruns and schedule delays early, when the impact can be minimized. However, DHS has not ensured that its prime contractor’s EVM system, which was certified as meeting relevant standards, has been effectively implemented on SBInet. In particular, it has not ensured that performance measurement baselines were validated in a timely manner, that established baselines were complete and realistic, and that contractor-provided cost and schedule data were reliable. Reasons cited by program officials for these weaknesses include the instability in the scope of the work to be performed, an unexpected temporary stop in Block 1 design and deployment work when SBInet funding was redirected, and the contractor’s use of estimated, rather than actual, costs for subcontractor work, which are subsequently adjusted when actual costs are received. Without effectively implementing EVM, DHS has not been positioned to identify potential cost and schedule problems early, and thus has not been able to take timely actions to correct problems and avoid program schedule delays and cost increases.

In August 2005, the Office of Management and Budget issued guidance that, among other things, directs agencies to ensure that EVM systems are compliant with the American National Standards Institute (ANSI) standard. The ANSI standard consists of 32 guidelines associated with a sound EVM system that are intended to ensure that data are reliable and can be used for informed decision-making. The program office relies on the prime contractor’s EVM system to provide cost and schedule performance data. This system was certified in April 2005 by DCMA as being compliant with the ANSI standard.
DCMA certified the contractor’s EVM system again in February 2009. Notwithstanding these certifications, DCMA identified a number of issues with the contractor’s implementation of its EVM system. In particular, in January 2010, DCMA reported that the SBInet prime contractor’s implementation of EVM was not consistent with all of the 32 ANSI guidelines. Specifically, DCMA identified concerns with the quality of scheduling and reporting, and the identification of significant differences between planned and actual cost and schedule performance, as well as reasons for those differences.

According to relevant guidance, the performance measurement baseline, which is the foundation of an EVM system and the estimated cumulative value of planned work, serves as the value against which performance is measured for the life of the program or task order. As such, it should be established as early as possible after contract or task order award, or whenever a major contract modification or baseline change occurs. DHS guidance further states that a baseline should be validated within 90 days of the contract or task order award. However, the program office validated a performance measurement baseline within 90 days for only two of the six baselines that we reviewed (see figure 4). For the other four, the length of time to establish a validated baseline ranged from 5 to 10 months. For example, the program office issued the ADTO in June 2008, and it did not establish a validated baseline until 10 months later in April 2009. Similarly, in February 2009, the program office modified the scope of the STO and extended the period of performance, but it did not validate the revised baseline to include the additional scope and time until 7 months later in September 2009. Figure 4 summarizes the periods of time during which earned value was, and was not, measured against a validated baseline.
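The 90-day expectation in DHS guidance amounts to a simple date check. The following Python sketch illustrates it using the ADTO example above; the exact days of the month are assumptions for illustration, since the report gives only months:

```python
# Sketch of the DHS guidance rule: a performance measurement baseline should
# be validated within 90 days of task order award. Day-of-month values below
# are assumed; the report states only "June 2008" and "April 2009" for ADTO.
from datetime import date

def validated_on_time(award: date, validated: date, limit_days: int = 90) -> bool:
    """True if the baseline was validated within limit_days of award."""
    return (validated - award).days <= limit_days

adto_award = date(2008, 6, 1)       # ADTO issued June 2008 (day assumed)
adto_validated = date(2009, 4, 1)   # baseline validated April 2009 (day assumed)
print(validated_on_time(adto_award, adto_validated))  # False: roughly 10 months elapsed
```

Under these assumed dates, the ADTO baseline misses the 90-day window by more than 200 days, consistent with the 10-month delay described above.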
According to program officials, the delays in validating performance baselines were due to instability in the work to be performed, and the need to temporarily stop Block 1 design and deployment work between September 2008 and January 2009 because of DHS’s decision to redirect funds from SBInet to the physical infrastructure. Without validated baselines, DHS was not positioned to identify potential cost and schedule problems early and to take timely corrective actions to mitigate those problems. An integrated baseline review (IBR) is used to validate the performance measurement baseline. This review is intended to verify that the baseline is realistic and ensure that the contractor and the government mutually understand scope, schedule, and risks for a given task order before a substantial amount of work is performed. According to relevant guidance, establishing a complete and realistic performance measurement baseline includes (1) assigning responsibility for managing, tracking, and reporting earned value data for work performed; (2) estimating needed resources (i.e., budgets and staff), including management reserve, for performing assigned tasks; (3) defining a product-oriented description of all work to be performed; (4) scheduling all work in a time-phased sequence that reflects the duration of the program’s activities; and (5) establishing objective performance measures for each task. In validating the performance measurement baselines for the four task orders that we reviewed, the program office implemented two of the above elements, but it did not implement the other three. 
Specifically, for each of the six baselines associated with the task orders, the program office (1) assigned responsibility for managing, tracking, and reporting earned value data associated with each work breakdown structure element and (2) estimated a time-phased budget, including the anticipated staff needed, for each work breakdown structure element, and established a management reserve. However, as discussed in the following section, the program office did not (1) define a product-oriented description of all work to be performed, (2) reliably estimate schedule baselines, and (3) adequately measure earned value performance. Program officials attribute these limitations in establishing comprehensive baselines to instability in the nature of the work to be performed and the prime contractor’s method for determining subcontractor performance. Nevertheless, without complete and realistic baselines, the SPO has been hampered in its ability to conduct meaningful measurement and oversight of the prime contractor’s status and progress, as well as holding the contractor accountable for results. More importantly, the lack of meaningful measurement and oversight has contributed to program cost overruns and schedule delays. According to relevant guidance, a work breakdown structure deconstructs a program’s end product into successively smaller levels until the work is subdivided to a level suitable for management control. Further, a work breakdown structure should be product oriented and include all work to be performed. The work breakdown structure that was used to define each of the task order baselines was not product oriented. Instead, it was defined in terms of functions that span multiple system products, such as systems engineering, system test and evaluation, and program management. Additionally, the work breakdown structure did not reflect all work to be performed. 
Specifically, for four of the six performance measurement baselines, the work breakdown structure did not include all work described in the corresponding task order’s statement of work. For example, the work breakdown structure used to define the May 2008 STO baseline did not include the work associated with identifying and selecting components that meet system requirements and program security. Similarly, DCMA reported in June 2008 that the work breakdown structure included in this baseline did not account for all work identified in the system task order. A reliable schedule provides a road map for systematic execution of a program and the means by which to gauge progress, identify and address potential problems, and promote accountability. Our research has identified nine best practices associated with developing and maintaining a reliable schedule: (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) integrating activities horizontally and vertically, (6) establishing the critical path for all activities, (7) identifying reasonable “float” between activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations. The six task order baselines were not reliable because they substantially complied with only two of the eight key schedule estimating practices, and they did not comply with, or only partially or minimally complied with, the remaining six practices. (See figure 5 for a summary of the extent to which each of the baseline schedules met each of the eight practices.) Capturing all activities. The six schedules did not capture all activities defined in the task order baseline. Specifically, five of the six schedules did not reflect the work to be performed across the four task orders (i.e., integrated master schedule). 
Further, as previously mentioned, four of six work breakdown structures were missing elements defined in the respective task order statements of work. Moreover, two of the six schedules did not reflect all work that was defined in the work breakdown structure. For example, the December 2009 DTO schedule omitted efforts associated with design work for TUS-1 and AJO-1. Sequencing all activities. The six schedules substantially met this practice. Each of the schedules identified almost all of the predecessor and successor activities. However, each contained improper predecessor and successor relationships. For example, the May 2008 STO baseline included 52 of 538 activities (about 10 percent) with improper predecessor and successor relationships. Additionally, many activities in four of the schedules were constrained by “start no earlier than” dates. For example, as previously reported, the September 2009 baseline schedule contained 403 of 1,512 activities (about 27 percent) with “start no earlier than” constraints, which means that these activities are not allowed to start earlier than their assigned dates, even if their respective predecessor activities have been completed. Assigning resources to all activities. Two of the six schedules partially met this practice. Specifically, two schedules included resources; however, those resources were allocated to less than 15 percent of the activities identified in each schedule. Moreover, the remaining four schedules did not include estimated resources. Instead, resources for all six schedules were maintained separately as part of the contractor’s earned value system and only available to DHS upon request. Establishing the duration of all activities. Each of the six baseline schedules substantially met this practice. Specifically, each schedule established the duration of key activities and included baseline start and end dates for most of the activities. 
Further, reasonable durations were established for the majority of the activities in the schedules, meaning that the durations established were less than 44 days. Nevertheless, each of the schedules included activities that were not of short duration, that is, more than 44 days. For example, the April 2009 ADTO baseline included 29 of 1,009 activities with durations ranging from 45 days to 352 days. Integrating activities horizontally and vertically. Each of the schedules partially met this practice. As mentioned previously, the six schedules did not capture all activities defined in the task order baseline. Further, four of six work breakdown structures were missing elements defined in respective task order statements of work. Additionally, five of six schedules did not reflect the work performed across the four task orders (i.e., integrated master schedule), and each had improper predecessor and successor relationships. Establishing the critical path for all activities. Each of the six schedules partially met this practice. Specifically, four of six work breakdown structures were missing elements defined in the respective task order statements of work. Additionally, four of the six schedules were missing predecessor and successor activities, and each of the schedules included improper predecessor and successor relationships. Further, five of the six schedules did not reflect the work to be performed across the four task orders (i.e., integrated master schedule). Unless all activities are included and properly linked, it is not possible to generate a true critical path. Identifying reasonable float between activities. Each of the schedules identified float; however, the amount of float was excessive. For example, the February 2008 C3I/COP task order baseline included 259 of 294 activities (about 88 percent) with float greater than 100 days and 189 of the 259 (about 73 percent) with float in excess of 200 days. Conducting a schedule risk analysis. 
DHS did not conduct a risk analysis of any of the schedules. According to the ANSI standard for EVM systems, only work for which measurement is impractical may be classified as “level-of-effort.” Our research shows that if more than 15 percent of a program’s budget is measured using level-of-effort, then that amount should be scrutinized because it does not allow schedule performance to be measured (i.e., performance equals planned work). However, the six baselines had between 34 and 85 percent of the baseline dollar value categorized as level-of-effort, including four with more than 50 percent (see table 2). Moreover, for five of the six baselines, program documentation showed that the program office did not identify any action items during the respective IBRs related to the high use of level-of-effort. According to program officials, the STO, which categorized between 70 and 85 percent of the baseline dollar value as level-of-effort, includes many program management activities (e.g., cost, schedule, and subcontractor management). Nevertheless, they recognized that the level-of-effort for this task order was high, and in November 2009, they directed the contractor to minimize the use of level-of-effort for STO. According to program officials, the high level-of-effort was due, in part, to the prime contractor’s use of this measurement for subcontractor work. In November 2009, DCMA stated that the SPO’s use of level-of-effort activities was high, noting that this could be masking true contractor performance. If performed properly, EVM can provide an objective means for measuring program status and forecasting potential program cost overruns and schedule slippages so that timely action can be taken to minimize their impact. To do so, however, the underlying EVM data must be reliable, meaning that they are complete and accurate and all data anomalies are explained. 
In the case of SBInet, the EVM data provided by the prime contractor for the 21-month period ending in February 2010 have not been reliable, as evidenced by numerous and unexplained anomalies in monthly EVM reports. Reasons for the anomalies include the contractor’s use of estimated, rather than actual, costs for subcontractor work, which are subsequently adjusted when actual costs are received. Without reliable performance data, the true status of the SBInet program is unclear, thus limiting the SPO’s ability to identify potential cost and schedule shortfalls. EVM is a proven program measurement approach that, if implemented appropriately, can create a meaningful and coherent understanding of a program’s true health and status. As a result, the use of EVM can alert decision makers to potential program problems sooner than is possible by using actual versus planned expenditure alone, and thereby reduce the chance and magnitude of program cost overruns and schedule slippages. Simply stated, EVM measures the value of completed work in a given period (i.e., earned value) against (1) the actual cost of work completed for that period (i.e., actual cost) and (2) the value of the work that is expected to be completed for that period (i.e., planned value). Differences in these values are referred to as cost and schedule variances, respectively. Cost variances compare the value of the work completed with the actual cost of the work performed. For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a negative $1.7 million cost variance. Schedule variances are also measured in dollars, but they compare the value of the work completed with the value of the work that was expected to be completed. For example, if a contractor completed $5 million worth of work at the end of the month but was expected to complete $10 million worth of work, there would be a negative $5 million schedule variance. 
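The cost and schedule variance arithmetic just described can be sketched in a few lines of Python; the dollar figures are the illustrative ones from the examples above, not actual SBInet data:

```python
# EVM variance arithmetic, using the report's illustrative figures
# (values in millions of dollars; not actual SBInet data).

def cost_variance(earned_value: float, actual_cost: float) -> float:
    """Value of work completed minus what that work actually cost."""
    return earned_value - actual_cost

def schedule_variance(earned_value: float, planned_value: float) -> float:
    """Value of work completed minus value of work expected to be completed."""
    return earned_value - planned_value

ev = 5.0    # contractor completed $5 million worth of work...
ac = 6.7    # ...which actually cost $6.7 million...
pv = 10.0   # ...when $10 million worth of work was expected to be completed

cv = cost_variance(ev, ac)       # about -1.7: work cost more than planned
sv = schedule_variance(ev, pv)   # -5.0: work is behind schedule
print(f"Cost variance: {cv:+.1f}M, Schedule variance: {sv:+.1f}M")
```

Both variances come out negative here, matching the report's reading: negative values signal cost growth and schedule slippage, and all three inputs (earned value, planned value, actual cost) are needed to compute them.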
Positive variances indicate that activities are costing less or are being completed ahead of schedule. Negative variances indicate activities are costing more or are falling behind schedule. To determine both cost and schedule variances, all three values are necessary. According to relevant guidance, EVM data should be valid and free from unexplained anomalies (e.g., missing or negative values) because they can limit program management’s ability to identify potential cost and schedule shortfalls. Therefore, anomalies should be minimized for each of the three values—earned value, planned value, and actual cost. Moreover, all anomalies should be identified, and the reason for each should be fully explained in the monthly EVM reports. To do less limits the completeness and accuracy of these values, and thus makes the resulting variance determinations unreliable. While an industry standard for what constitutes an acceptable volume of anomalies does not exist, EVM experts in the public and private sectors that we interviewed stated that the occurrence of EVM data anomalies should be rare. Some of these experts agreed that an anomaly should occur in no more than 5 percent of the work breakdown structure elements for a given contract or task order, while others advocated an occurrence rate of no more than 1 to 2 percent. However, the EVM data that the prime contractor delivered to the SPO from June 2008 through February 2010 (21 months) contained numerous, unexplained anomalies. Specifically, the monthly EVM reports for all four task orders that we reviewed showed one or more anomalies (e.g., missing or negative values for earned value, planned value, and actual cost) in each of the months that had a validated performance measurement baseline. More specifically, the average percentage of work breakdown structure elements across the four task orders that had data anomalies during this 21-month period ranged from 11 percent to 41 percent.
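The experts' rule of thumb can be operationalized by computing the share of work breakdown structure elements with anomalous values and comparing it against a threshold. The sketch below is hypothetical: the field names and sample data are illustrative, not drawn from any SBInet system, and an "anomaly" is taken as the report describes it, a missing or negative earned value, planned value, or actual cost:

```python
# Hypothetical anomaly screen for a monthly EVM report. Flags the report if
# the share of work breakdown structure (WBS) elements with a missing or
# negative earned value, planned value, or actual cost exceeds a threshold.

def has_anomaly(element: dict) -> bool:
    """An element is anomalous if any of its three EVM values is missing or negative."""
    return any(element.get(k) is None or element[k] < 0
               for k in ("earned_value", "planned_value", "actual_cost"))

def anomaly_rate(elements: list) -> float:
    """Fraction of WBS elements in the report that contain an anomaly."""
    return sum(has_anomaly(e) for e in elements) / len(elements)

# Illustrative monthly report: 4 WBS elements, 2 of them anomalous
# (one with a negative actual cost, one with a missing actual cost).
report = [
    {"earned_value": 13_000, "planned_value": 13_000, "actual_cost": -550_000},
    {"earned_value": 25_000, "planned_value": 200_000, "actual_cost": None},
    {"earned_value": 40_000, "planned_value": 45_000, "actual_cost": 42_000},
    {"earned_value": 10_000, "planned_value": 12_000, "actual_cost": 11_000},
]

EXPERT_THRESHOLD = 0.05  # the 5 percent bar cited by the experts interviewed
rate = anomaly_rate(report)
print(f"{rate:.0%} of WBS elements have anomalies "
      f"({'exceeds' if rate > EXPERT_THRESHOLD else 'within'} the 5% bar)")
```

Against the 5 percent bar, this illustrative report's 50 percent rate would be flagged immediately; the 11 to 41 percent averages observed on the four task orders would likewise fail the screen every month.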
For the C3I/COP task order in particular, the monthly percentage of work breakdown structure elements with anomalies ranged between 25 and 67 percent over the 21 months. (See figure 6 for the percentage of work breakdown structure elements with anomalies by month for each of the four task orders.) The October 2009 STO monthly EVM report illustrates how the anomalies can distort the contractor’s performance. According to this report, about $13,000 worth of work was planned to be completed on integration and management, approximately $13,000 worth of work was performed, and actual costs were about negative $550,000. Thus, the report erroneously suggests that the contractor performed $13,000 worth of work, and actually saved about $550,000 in doing so. Similarly, the September 2009 ADTO monthly report showed that about $200,000 worth of work was planned to be completed on tower sites and infrastructure, and $25,000 worth of work was performed, but that no costs were incurred, suggesting that the work was performed for free. Exacerbating the large percentage of monthly data anomalies across the four task orders is the fact that in most cases the reasons for the anomalies were not explained in the monthly EVM variance analysis reports. Specifically, about 79 percent of all anomalies across all four task orders during the 21-month period were not explained. In particular, 82 of 119 (or about 69 percent) of all data anomalies for the STO task order were not explained in the monthly reports, and none of the anomalies were explained for DTO. (See figure 7 for the total number of data anomalies and the number that were explained in the monthly reports across the four task orders.) Program officials acknowledged problems with the EVM data and stated that they meet with the prime contractor each month to discuss the EVM reports, including the reliability of the data. 
According to program officials, limitations in the EVM data are due, in part, to the contractor’s use of estimated, rather than actual, costs for subcontractor work, which are subsequently adjusted when actual costs are received. Officials further stated that they have been working with the contractor to reduce the volume of unexplained anomalies, and they believe that the reliability of the data has improved since February 2010. However, program officials did not provide any documentation to support this statement. Without reliable EVM data, the program office is unable to identify actual cost and schedule shortfalls, an inability that, along with the other contractor tracking and oversight weaknesses discussed in this report, has limited its ability to effectively minimize program cost increases and schedule delays. Effective management and oversight of a program’s prime contractor is essential to successfully acquiring and deploying a system like SBInet. Integral to accomplishing this is defining and implementing a range of contractor management and oversight controls (e.g., processes and practices) that reflect relevant federal guidance and best practices. To do less increases the chances that contractor-delivered products and services will not satisfy stated requirements and will not meet customer expectations. The result is incurring the additional time and expense to redo or rework contractor deliverables, accepting products and services that do not perform as intended and do not meet mission needs, or both. Overall, DHS has not done an effective job of managing and overseeing its prime contractor, including monitoring the contractor’s performance. DHS has largely defined key management and oversight processes and practices that it should have followed, and it implemented a number of these processes and practices. However, several key management and oversight controls were not adequately defined, and essential controls were not implemented. 
Most significantly, DHS did not adequately document deliverable reviews and communicate the basis for rejecting certain deliverables in writing to the contractor, which contributed to deliverables that did not live up to expectations, necessitated rework, and caused later problems. Further, technical reviews were not grounded in explicit criteria for determining when reviews should begin and conclude, which also contributed to contract deliverables requiring costly and time-consuming rework. In addition, the cost and schedule baselines for measuring the contractor’s performance were frequently validated too late and without sufficient accuracy and completeness to provide a meaningful basis for understanding performance, which precluded DHS from taking timely action to correct unfavorable results and trends. Compounding these serious baseline limitations were contractor-provided data about actual performance that were replete with unexplained anomalies, thus rendering the data unfit for effective contractor management and oversight. Notwithstanding a number of contractor management and oversight definition and implementation efforts that DHS executed well, such as defining key processes and practices and training key staff, the weaknesses cited above collectively mean that DHS’s management and oversight of its prime contractor has been a major contributor to the SBInet program’s well-chronicled history of not delivering promised system capabilities on time and on budget. These limitations can be attributed to a number of factors, including gaps in how certain processes and practices were defined, as well as not enforcing other processes and practices that were defined and applicable and not taking sufficient time to review deliverables that were submitted late. 
The limitations can be further attributed to the fact that SBInet has from its outset lacked clear definition and stability, and thus has experienced continuous change in scope and direction—an issue that we have previously reported on and made recommendations to address. Collectively, these factors have helped to create a contractor management and oversight environment which, when combined with the many other acquisition management weaknesses that we have previously reported on and made recommendations to address, has produced a program that to date has not been successful and, if not corrected, could become worse. To improve DHS management and oversight of the SBInet prime contractor, we recommend that the Secretary of Homeland Security direct the Commissioner of U.S. Customs and Border Protection to have the SBI Executive Director, in collaboration with the SBInet Program Director, take the following four actions: Revise and implement, as applicable, contractor deliverable review processes and practices to ensure that (1) contractor deliverables are thoroughly reviewed and reviews are not constrained by late contractor deliverables or imposed milestones, (2) the reviews are sufficiently documented, and (3) the acceptance or rejection of each contractor deliverable is communicated in writing to the contractor, including an explicit explanation of the basis for any rejection. Ensure that applicable entry and exit criteria for each technical review are used and satisfied before initiating and concluding, respectively, a given review. 
Establish and validate timely, complete, and accurate performance measurement baselines for each new task order or major modification of an existing task order, as appropriate, to include, but not be limited to, ensuring that (1) the work breakdown structure includes all work to be performed, (2) baseline schedules reflect the key schedule estimating practices discussed in this report, and (3) level-of-effort performance measurement in excess of 15 percent is scrutinized, justified, and minimized. Ensure that all anomalies in contractor-delivered monthly earned value management reports are identified and explained, and report periodically to DHS acquisition leadership on relevant trends in the number of unexplained anomalies. Because we have already made recommendations in prior reports to address the other management and oversight weaknesses discussed in this report, such as those related to requirements management, risk management, and Systems Engineering Plan implementation, we are not making any additional recommendations at this time. In written comments on a draft of this report, signed by the Director, Departmental GAO/OIG Liaison Office and reprinted in appendix II, DHS agreed with our four recommendations and described actions under way or planned, which we summarize below, to address them. With respect to our recommendation to revise and implement the contractor deliverable review process, DHS stated that it is updating the process to require written documentation of each review and the communication to the contractor of review results. 
With respect to our recommendation to ensure that entry and exit criteria are used to initiate and conclude each technical review, DHS stated that it has established an SBI Systems Engineering Directorate to focus on technical oversight of the acquisition process, adding that the Directorate is developing a technical review guide that describes in detail the review process and the relevant entry and exit criteria for each technical review. With respect to our recommendation to establish and validate timely, complete, and accurate performance measurement baselines, DHS stated that it is mindful of the need to establish and maintain current performance baselines, and to plan and implement baseline updates as completely and promptly as practicable, which it indicated is done through IBRs. DHS also noted that while scheduling practices remain a challenge, it continues to make improvements to its process, including implementing scheduling tools and templates. With respect to our recommendation to identify and explain all anomalies in monthly EVM reports, and to periodically report relevant trends to DHS acquisition leadership, DHS acknowledged the need to correctly document anomalies in the monthly EVM reports, and stated that it is working with DCMA to address contractor quality control issues and improve the content of the monthly EVM reports. It also stated that it is augmenting the reports with routine conversations between contractor and project management staff. The department also committed to advising the appropriate acquisition leaders through established reporting and oversight opportunities as issues arise with contractor performance or reporting. Notwithstanding its agreement with our recommendations, the department also commented that it took exception to selected findings and conclusions regarding the program’s implementation of EVM. A summary of DHS’s comments and our responses is provided below. 
The department stated that it took exception to our finding that it did not ensure performance measurement baselines were validated in a timely manner, and said that it was not accurate to conclude that the lack of validated baselines precluded the program office from identifying cost and schedule problems and taking corrective action. In support of these positions, the department made the following three points, which our response addresses. First, the department stated that the SBInet program office delayed formal IBRs until it had finalized negotiated modifications to the task orders, and in doing so, was able to complete an IBR within 90 days of each major task order modification. We do not question whether the program office held IBRs within 90 days of final negotiation of major task order modifications. Our point is that DHS did not validate task order performance measurement baselines (i.e., hold IBRs) within 90 days of task order award, which is what DHS guidance states should occur. As our report states, the program office only met this 90-day threshold for two of the six baselines that we reviewed. Further, the length of time to validate the performance baselines for the four task orders far exceeded 90 days (5 to 10 months), during which time DHS reports show that significant work was performed and millions of dollars were expended. In fact, the DHS reports show that most of the planned work for some of these task orders had already been performed by the time the IBR was held and the baseline was validated. As we state in our report, and DHS acknowledged in its comments, the purpose of an IBR is to verify that the performance baseline is realistic and that the scope, schedule, and risks are mutually understood by the contractor and the government before a substantial amount of work is performed. 
Second, DHS commented that the program office maintained what it referred to as “interim” performance measurement baselines during the period of major program scope, schedule, and budget changes. We acknowledge that in some cases the program office had these “interim” baselines. However, these baselines are the contractor-provided baselines, meaning that the program office and the contractor had not mutually agreed to the scope, schedule, and risks associated with the work to be performed. Moreover, for two of the task orders, the program office did not have an “interim” baseline, even though the contractor performed significant work under these task orders. Third, the department stated that program leadership reviewed the contractor’s technical and financial performance information relative to performance measurement baselines and implemented management actions as needed. We do not question whether program leadership reviewed contractor-provided performance information or whether actions to address problems may have been taken. However, our report does conclude, as is discussed later in this section, that the combination of the EVM weaknesses that our report cites, to include unreliable performance baselines and contractor-provided performance data, did not allow the program office to identify performance problems early and to take timely actions to avoid the well-documented schedule delays and cost increases that the program has experienced. The department expressed two concerns with how it said our report characterized and quantified EVM anomalies. First, the department stated that our report failed to distinguish between factual errors and legitimate monthly accounting adjustments. We agree that our report does not distinguish between the two types of anomalies, and would add that this was intentional because making the distinction was not relevant to our finding. 
Specifically, our finding is that the reasons for the numerous anomalies were not explained in the monthly EVM variance analysis reports, therefore making the true status of the program unclear. Second, DHS stated that we incorrectly concluded that both errors and adjustments are problematic, distort cost performance, and limit management insight. In response, we did not conclude that all errors and adjustments have these impacts, but rather that the lack of explanation associated with such a large volume of anomalies made the true status of the program unclear, thus limiting the program office’s ability to identify actual cost and schedule shortfalls, which is certainly problematic. Further, our report cites examples of cost performance data that provide a distorted picture of actual performance vis-à-vis expectations. Accordingly, the correct characterization of the report’s conclusion concerning the reliability of EVM data is that the lack of explanation of the numerous anomalies in monthly reports is problematic, provides a distorted picture of cost performance, and limits management insight. To this very point, DHS acknowledged in its comments the importance of explaining the reason for anomalies in the monthly variance reports, regardless of whether they are due to factual errors or accounting adjustments. The department stated that it took exception to our conclusion that the program office’s lack of validated baselines in particular, and EVM shortcomings in general, contributed to cost and schedule growth and made it unable to identify cost and schedule problems early and take corrective actions to avoid them. In response, we did not conclude that the lack of validated baselines alone had either of these impacts. 
However, we did conclude that the collection of EVM weaknesses discussed in our report, to include untimely validated baselines, incomplete and unreliable baselines, and unreliable performance data, together precluded the program office from identifying problems early and taking corrective action needed to avoid the program’s well-chronicled history of schedule delays and cost increases. In support of this conclusion, we state in the report, for example, that the performance measurement baselines that we reviewed understated the cost and time necessary to complete the work because they did not capture all work in the task orders’ statements of work and because they were not grounded in a range of scheduling best practices. Given that cost and schedule growth is a function of the baseline against which actual cost and schedule performance is measured, it follows logically that an understated baseline would produce actual cost overruns and schedule delays. In addition, we would note that beyond these EVM shortcomings, our report also recognizes other contract tracking and oversight, test management, and requirements management weaknesses that have collectively contributed to the program’s cost, schedule, and performance shortfalls. In addition to the above points, DHS provided technical comments, which we have incorporated in the report as appropriate. We are sending copies of this report to the Chairmen and Ranking Members of the Senate and House Appropriations Committees and other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We will also send copies to the Secretary of Homeland Security, the Commissioner of U.S. Customs and Border Protection, and the Director of the Office of Management and Budget. In addition, the report will be available at no cost on the GAO Web site at http://www.gao.gov. 
Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at hiter@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to determine the extent to which the Department of Homeland Security (DHS) (1) has defined and implemented effective controls for managing and overseeing the Secure Border Initiative Network (SBInet) prime contractor and (2) is effectively monitoring the prime contractor’s progress in meeting cost and schedule expectations. To accomplish our objectives, we focused on four key task orders—the Arizona Deployment Task Order, the Design Task Order, the System Task Order, and the Command, Control, Communications, and Intelligence/Common Operating Picture Task Order—that are integral to the design, development, and deployment of the first increment of SBInet, known as Block 1. We also focused on the SBInet System Program Office’s (SPO) contracting and oversight activities that occurred from June 2008 through February 2010. To determine the extent to which DHS has defined and implemented effective controls for managing and overseeing the SBInet prime contractor, we focused on training, verifying and accepting contract deliverables, conducting technical reviews, and conducting management reviews. We chose these four because each is important to tracking and overseeing contractor performance for the four task orders. Training. We compared relevant DHS contractor management and oversight training requirements to the program managers’, contracting officers’, and contracting officer’s technical representatives’ training certifications. Verifying and accepting contract deliverables. We compared U.S. 
Customs and Border Protection’s (CBP) process for verifying and accepting contract deliverables to leading industry practices to identify any variances. We then assessed a nonprobability, random sample of 28 contract deliverables that the SPO identified as being delivered between June 1, 2008, and August 31, 2009. We also judgmentally selected one additional deliverable that was delivered between September 1, 2009, and February 28, 2010. For each of the 29 deliverables, we reviewed relevant documentation, such as the contract deliverables, review comment forms, and documented communications with the prime contractor indicating acceptance or rejection of the deliverable, and compared them to the CBP process and leading industry practices to determine what, if any, deviations existed. Technical reviews. We compared relevant DHS and CBP guidance and entry and exit criteria in the task order Data Item Descriptions to leading industry practices to identify any variances. We assessed a nonprobability, random sample of technical reviews. Specifically, we assessed a technical review from each of the eight unique combinations of task orders and review types held between June 1, 2008, and August 31, 2009. We also judgmentally selected one additional review that was conducted between September 1, 2009, and February 28, 2010. For each of the nine reviews, we compared the package of documentation prepared for and used during these reviews to the criteria defined in the relevant task orders to determine the extent to which the reviews satisfied the criteria. Management reviews. We compared relevant CBP guidance to leading industry practices to identify any variances. We then compared relevant documentation prepared for and used during monthly joint program management reviews to determine the extent to which the reviews addressed cost, schedule, and program risks. 
We also assessed a nonprobability sample of 11 action items that were identified during the reviews held from October 2008 through October 2009, and assessed relevant documentation to determine the extent to which they were tracked to closure. To determine the extent to which DHS is effectively monitoring the prime contractor’s progress in meeting cost and schedule expectations, we focused on the program’s implementation of earned value management (EVM) because it was the tool used to monitor the contractor’s cost and schedule performance. Specifically, we analyzed the six performance measurement baselines, and the associated integrated baseline review documentation, such as briefings, the work breakdown structure (WBS) governing all task orders, task order statements of work, schedules, monthly contract performance reports, control account plans, and responsibility assignment matrixes. In doing so, we compared this documentation to EVM and scheduling best practices as identified in our Cost Estimating and Assessment Guide. Specifically, for each of the six baselines: We reviewed control account plans and responsibility assignment matrixes to determine the period of performance and scope of work for each baseline, compared the work described in the respective task order statements of work to the work described in the responsibility assignment matrix, and reviewed the control account plans to determine the extent to which the level-of-effort measurement method was used to measure contractor performance. We analyzed the schedule presented at each baseline review against eight key schedule estimating practices in our Cost Estimating and Assessment Guide. In doing so, we used commercially available software tools to determine whether each schedule, for example, included all critical activities, a logical sequence of activities, and reasonable activity durations. 
Further, we characterized the extent to which the schedule met each of the practices as either not met, minimally met, partially met, substantially met, or met. We analyzed the contract performance reports for each of the four task orders for each month that there was a validated baseline. Specifically, we identified instances of the following: (1) negative planned value, earned value, or actual cost; (2) planned value and earned value without actual cost; (3) earned value and actual cost without planned value; (4) actual cost without planned value or earned value; (5) earned value without planned value and actual cost; (6) inconsistencies between the estimated cost at completion and the planned cost at completion; (7) actual cost exceeding estimated cost at completion; and (8) planned or earned values exceeding planned cost at completion. To determine the number of anomalies, we identified each WBS element that had one or more of the above anomalies. Then, we identified the number of WBS elements at the beginning and the end of the baseline period of performance, and calculated the average number of WBS elements. We used this to determine the percentage of WBS elements with anomalies for each task order and for each month for which there was a validated baseline. To support our work across this objective, we interviewed officials from the Department of Defense’s Defense Contract Management Agency (DCMA), which provides contractor oversight services to the SPO, including oversight of EVM implementation, and prime contractor officials. We also reviewed DCMA monthly status reports and corrective action reports. For both objectives, we interviewed program officials to obtain clarification on the practices, and to determine the reasons for any deviations. 
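The anomaly screens enumerated above could be expressed as checks on each WBS element's monthly values. The following Python sketch is illustrative only: the element values, as well as the estimated and planned cost-at-completion figures, are hypothetical, and the cost-at-completion consistency checks are simplified relative to the full list of conditions:

```python
def wbs_element_anomalies(pv, ev, ac, eac, bac):
    """Flag EVM data anomalies of the kinds described above for one work
    breakdown structure element. pv, ev, and ac are the period's planned
    value, earned value, and actual cost; eac and bac are the estimated
    and planned (budgeted) cost at completion."""
    flags = []
    if pv < 0 or ev < 0 or ac < 0:
        flags.append("negative planned value, earned value, or actual cost")
    if pv > 0 and ev > 0 and ac == 0:
        flags.append("planned and earned value without actual cost")
    if ev > 0 and ac > 0 and pv == 0:
        flags.append("earned value and actual cost without planned value")
    if ac > 0 and pv == 0 and ev == 0:
        flags.append("actual cost without planned value or earned value")
    if ev > 0 and pv == 0 and ac == 0:
        flags.append("earned value without planned value and actual cost")
    if ac > eac:
        flags.append("actual cost exceeds estimated cost at completion")
    if pv > bac or ev > bac:
        flags.append("planned or earned value exceeds planned cost at completion")
    return flags

# Pattern of the September 2009 ADTO example: work planned and performed,
# but no cost reported (eac/bac are hypothetical)
adto = wbs_element_anomalies(pv=200_000, ev=25_000, ac=0,
                             eac=500_000, bac=500_000)

# Pattern of the October 2009 STO example: negative actual cost
sto = wbs_element_anomalies(pv=13_000, ev=13_000, ac=-550_000,
                            eac=500_000, bac=500_000)
```

A report-level rate would then follow by counting elements with a nonempty flag list against the average number of WBS elements in the baseline period, as the methodology describes.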
To assess the reliability of the data that we used to support the findings in this report, we reviewed relevant program documentation to substantiate evidence obtained through interviews with knowledgeable agency officials, where available. We determined that the data used in this report are sufficiently reliable. We have also made appropriate attribution indicating the sources of the data. We performed our work at the CBP headquarters and prime contractor facilities in the Washington, D.C., metropolitan area and with DCMA officials from Huntsville, Alabama. We conducted this performance audit from June 2009 to October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Deborah A. Davis (Assistant Director), Tisha Derricotte, Neil Doherty, Kaelin Kuhn, Lee McCracken, Jamelyn Payan, Karen Richey, Matt Snyder, Sushmita Srikanth, Stacey Steele, and Matthew Strain made key contributions to this report.
The Department of Homeland Security's (DHS) Secure Border Initiative Network (SBInet) is to place surveillance systems along our nation's borders and provide Border Patrol command centers with the imagery and related tools and information needed to detect breaches and make agent deployment decisions. To deliver SBInet, DHS has relied heavily on its prime contractor. Because of the importance of effective contractor management and oversight to SBInet, GAO was asked to determine the extent to which DHS (1) defined and implemented effective controls for managing and overseeing the prime contractor and (2) effectively monitored the contractor's progress in meeting cost and schedule expectations. To do this, GAO analyzed key program documentation against relevant guidance and best practices, and interviewed program officials. DHS has largely defined but has not adequately implemented the full range of controls that is reflected in relevant guidance and related best practices and is needed to effectively manage and oversee its SBInet prime contractor. To the department's credit, it has defined a number of key policies and procedures for verifying and accepting contract deliverables and conducting technical reviews, such as the criteria that need to be met before commencing and concluding a critical design review. Moreover, it has implemented some of these defined practices, such as those associated with training key contractor management and oversight officials. However, DHS has not effectively implemented other controls. For example, it has not adequately documented its review of contract deliverables and communicated to the prime contractor the basis for rejecting certain deliverables. Further, it has not ensured that key documentation satisfied relevant criteria before concluding key technical reviews. 
These weaknesses can be attributed in part to limitations in the defined deliverable verification and acceptance process, a program office decision to exclude certain deliverables from the process, and insufficient time to review technical review documentation. All told, DHS has not effectively managed and overseen its SBInet prime contractor, thus resulting in costly rework and contributing to SBInet's well-chronicled history of not delivering promised capabilities and benefits on time and within budget. DHS has not effectively monitored the SBInet prime contractor's progress in meeting cost and schedule expectations. While DHS has used earned value management (EVM), which is a proven management approach for understanding program status and identifying early warning signs of impending schedule delays and cost overruns, it has not ensured that its contractor has effectively implemented EVM. In particular, DHS has not ensured that validated performance baselines (the estimated value of planned work against which performance is measured) were timely, complete, and accurate. For example, the contractor was allowed to perform work on task orders for several months without a validated baseline in place. Further, not all work to be performed was included in the baselines that were eventually established, and the schedules for completing this work did not substantially comply with six of the eight key practices that relevant guidance states are important to having a reliable schedule. Also, DHS regularly received incomplete and anomalous EVM data from the prime contractor, which it had to rely on to measure progress and project the time and cost to complete the program. As a result, DHS has not been able to gain meaningful and proactive insight into potential cost and schedule performance shortfalls, and thus take corrective actions to avoid shortfalls in the future. 
Program officials attributed these weaknesses in part to instability in the scope of the work to be performed, and to the contractor's use of estimated, rather than actual, costs for subcontractor work and the subsequent adjustments that are made when actual costs are received. This inability has contributed to the program's failure to live up to expectations and to its costing more and taking longer than necessary. GAO is making recommendations to DHS aimed at revising and implementing policies and procedures related to contractor deliverables and technical reviews, and at improving EVM baselines and data. DHS agreed with GAO's recommendations and described actions to address them, but took exception to selected findings and conclusions regarding EVM implementation. GAO stands by these findings and conclusions for the reasons discussed in the report.
The following programs are authorized under Title IV of the Higher Education Act, as amended: Pell grants—grants to undergraduate students who are enrolled in a degree or certificate program and have federally defined financial need. Stafford and PLUS loans—these loans may be made by private lenders and guaranteed by the federal government (guaranteed loans) or made directly by the federal government through a student’s school (direct loans). Subsidized Stafford loans—loans made to students who are enrolled at least half-time in an eligible program of study and who have federally defined financial need. The federal government pays the interest costs on the loan while the student is in school. Unsubsidized Stafford loans—non-need-based loans made to students enrolled at least half-time in an eligible program of study. Although the terms and conditions of the loan (e.g., interest rates) are the same as those for subsidized loans, students are responsible for paying all interest costs on the loan. PLUS loans—non-need-based loans made to creditworthy parents of dependent undergraduate students enrolled at least half-time in an eligible program of study. Borrowers are responsible for paying all interest on the loan. Dependent students may borrow combined subsidized and unsubsidized Stafford loans up to $2,625 in their first year of college, $3,500 in their second year, and $5,500 in their third year and beyond. Independent students and dependent students without access to PLUS loans can borrow combined subsidized and unsubsidized Stafford loans up to $6,625 in their first year, $7,500 in their second year, and $10,500 in their third year and beyond. There are aggregate limits for an entire undergraduate education of $23,000 for dependent students and $46,000 for independent students or dependent students without access to PLUS loans. Campus-based aid—participating institutions receive separate allocations for three programs from Education. 
The institutions then award the following aid to students:

Supplemental Educational Opportunity Grants (SEOG)—grants for undergraduate students with federally defined financial need. Priority for this aid is given to Pell grant recipients.

Perkins loans—low-interest (5 percent) loans to undergraduate and graduate students. Interest does not accrue while the student is enrolled at least half-time in an eligible program. Priority is given to students who have exceptional federally defined financial need. Students can borrow up to $4,000 for any year of undergraduate education, with an aggregate limit of $20,000.

Work-study—on- or off-campus jobs in which students who have federally defined need earn at least the current federal minimum wage. The institution or off-campus employer pays a portion of their wages.

The amount of nonfederal grant aid has been increasing faster than the amount of federal grant aid, while the amount borrowed through federal loans has increased the most. As figure 1 shows, from 1991-92 to 2001-02, the total financial aid awarded from nonfederal grants more than doubled, while the amounts from federal grant programs increased much more modestly. During this time, the amount of aid borrowed through federal loan programs nearly doubled. The amount borrowed through nonfederal loans also rose from 1995-96 to 2001-02, but nonfederal loans remain the smallest of the four categories. As a result of increasing reliance on loans to pay college costs, there is growing concern about the level of loan debt students are accumulating. The median cumulative amount borrowed from all loan sources for graduating seniors increased (in constant 2001 dollars) from $9,800 in 1992-93 to $18,000 in 1999-2000.
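The Stafford and Perkins borrowing limits described above can be collected into a simple lookup (an illustrative sketch; the function and table names are ours, and the dollar amounts are the limits stated in this report, not a full statement of current law):

```python
# Combined subsidized + unsubsidized Stafford annual limits, per class year:
# (dependent students, independent or dependent-without-PLUS-access students)
STAFFORD_ANNUAL = {1: (2625, 6625), 2: (3500, 7500), 3: (5500, 10500)}
STAFFORD_AGGREGATE = {"dependent": 23000, "independent": 46000}
PERKINS_ANNUAL, PERKINS_AGGREGATE = 4000, 20000

def max_stafford(class_year, dependent, borrowed_to_date):
    """Most a student may still borrow in Stafford loans this year."""
    annual = STAFFORD_ANNUAL[min(class_year, 3)][0 if dependent else 1]
    cap = STAFFORD_AGGREGATE["dependent" if dependent else "independent"]
    return min(annual, max(0, cap - borrowed_to_date))
```

For example, a dependent third-year student who has already borrowed $20,000 could borrow at most $3,000 more, because the $23,000 aggregate limit binds before the $5,500 annual limit does.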
Even though graduates' incomes may have increased over the same period, some analysts have expressed concern about the increased reliance on loans in lieu of other options for financing a college education, such as resources the student and family already have. Education is responsible for, among other things, formulating federal postsecondary education policy, overseeing federal investments in support of students enrolled in postsecondary education, and managing the distribution of Title IV funds. Part of its role in fulfilling these responsibilities is to ensure that Title IV funds are used effectively. Education has established a performance indicator of maintaining borrower indebtedness and average borrower payments for federal student loans at less than 10 percent of borrower income in the first year of repayment. This indicator was established based on the belief that an educational debt burden of 10 percent of income or higher will negatively affect a borrower's ability to repay his or her student loans. Schools are responsible for determining individual students' eligibility for specific sources of financial aid and compiling these sources to meet each student's need—a process known as packaging. Part of this process involves deciding which types or sources of aid should be awarded first—for example, grants or loans, federal or nonfederal aid, need-based or non-need-based aid. Another factor to consider in packaging aid is whether to reduce aid from any source in a student's package to offset an aid award from another source. Such a reduction might be done, for example, when a student who has been awarded a significant amount of need-based aid subsequently obtains a substantial non-need-based aid award from a source outside his or her school's financial aid office.
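Education's 10-percent indicator amounts to comparing a standard amortized loan payment with monthly income. A rough sketch follows (the 6 percent rate and 10-year term are our illustrative assumptions, not the agency's published parameters):

```python
def monthly_payment(principal, annual_rate, years=10):
    """Standard amortized monthly payment on a fixed-rate loan."""
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

def debt_burden(principal, annual_rate, annual_income, years=10):
    """Share of monthly income going to student loan repayment."""
    return monthly_payment(principal, annual_rate, years) / (annual_income / 12)

# e.g., the $18,000 median debt at an assumed 6 percent against a $30,000 salary
burden = debt_burden(18_000, 0.06, 30_000)
exceeds_indicator = burden >= 0.10   # Education's 10-percent threshold
```

Under these assumptions the payment is roughly $200 a month, about 8 percent of income, so this borrower would fall just under the indicator.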
In school year 1999-2000, an estimated 732,000 of 3.4 million full-time/full-year federal aid recipients (22 percent) received $2.96 billion in financial aid greater than their federally defined financial need, either because they or their parents received substitutable loans or because they received nonfederal financial aid, such as scholarships, in addition to federal aid. Figure 2 shows how the number of aid recipients receiving aid greater than their federally defined need compares to the total number of financial aid recipients. Figure 3 shows how the amount of aid greater than federally defined need compares to total aid received. Of all federal aid recipients, about 19 percent (628,000) received total financial aid that was greater than their federally defined need solely as a result of receiving substitutable loans. We estimate this aid to total $2.72 billion, with an average amount of about $4,300. These students received aid that was greater than their federally defined need because, under the Higher Education Act, students and their families can borrow substitutable loans—unsubsidized Stafford and PLUS loans—to offset the amount of their expected family contribution (provided they do not exceed the annual and cumulative borrowing limits established for these programs). The way that schools package student financial aid could contribute to students receiving substitutable loans that increase their aid beyond their federally defined need. For example, of the 12 schools that provided information on their aid packaging practices, 7 automatically include substitutable loans in aid packages even when doing so raises students' aid above their federally defined need, while 5 require a student or family who wishes to obtain such a loan to apply for it. Another 3 percent of federal aid recipients (104,000) received aid that was greater than their federally defined need as a result of receiving nonfederal aid in addition to their federal aid.
We estimate this aid to total $238 million, with an average of about $2,300. This group of students continued to have aid greater than their federally defined need even after any substitutable loans they received were accounted for. Further, there was no pattern among these students in terms of the sources from which they received their financial aid, except that the majority received unsubsidized Stafford loans. (See table 1.) In addition, there was no pattern in terms of the types of schools they attended. The lack of any such pattern may be due to factors not captured in NPSAS data, such as the sequence in which financial aid was packaged. While we did not identify any common patterns or characteristics associated with students receiving aid greater than their federally defined need as a result of combinations of federal and nonfederal aid, there are a number of possible reasons why this may occur. In limited circumstances, students who receive Title IV assistance are allowed to receive aid that is greater than their federally defined need. First, schools cannot reduce the amount of a Pell grant even if it results in a student receiving aid greater than federally defined need. We found, however, that only 17 percent of the students in this group received Pell grants. Also, if aid greater than federally defined need is $300 or less, campus-based assistance does not need to be reduced, and subsidized Stafford loans do not need to be reduced if the student is also receiving federal work study. Finally, after any Stafford loan funds have been delivered to the student, the student is allowed to receive aid from a non-Title IV source, even if that aid results in aid greater than federally defined need. This could, for example, explain some of the 39 percent of students in this group who received subsidized Stafford loans.
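The $300 tolerance and the Pell rule just described can be summarized in a simplified check (a sketch of the logic as characterized in this report, not a full statement of Title IV regulations; the function and field names are illustrative):

```python
def must_reduce_campus_based_aid(total_aid, federal_need):
    """Campus-based aid need not be reduced when the over-award is $300 or less."""
    return (total_aid - federal_need) > 300

def reducible_components(total_aid, federal_need, package):
    """Components a school could trim; Pell grants are never reduced."""
    if not must_reduce_campus_based_aid(total_aid, federal_need):
        return []
    return [source for source in package if source != "pell"]
```

A $10,300 package against $10,000 of federally defined need would trigger no reduction under this reading; one dollar more would.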
In some cases, rules for nonfederal assistance can increase the likelihood of students receiving aid greater than their federally defined need from sources such as private scholarships. Benefactors of private scholarships may sometimes prohibit schools from reducing the amount of the scholarship even if a student's total aid package will be greater than the student's federally defined need. Also, several of the schools that provided information to us specifically cited students who receive both Pell grants and state, merit, or athletic scholarships that are greater than their federally defined need as cases in which they would not reduce total aid in order to stay within their federally defined need. We found that, among the students whose aid was greater than federal need due to combinations of federal and nonfederal aid, less than 5 percent received both Pell grants and state or institutional merit scholarships or athletic scholarships. Some schools—primarily private 4-year institutions—use different factors than those used by the federal government to determine eligibility for institutional need-based aid. These need formulas, known as institutional methodologies, may identify a higher level of need for a student than the federal government would. However, schools that use institutional methodologies must still use the federal definition of need to award federal need-based aid. By filling this higher level of need from aid sources that are not counted toward federally defined need, the student could receive more aid than their federally defined need would dictate. NPSAS does not capture whether any nonfederal need-based aid was distributed using these institutional methodologies. Under Title IV, financial aid officers have discretion to recalculate a student's need if the family's financial circumstances change dramatically, such as a parent's loss of employment.
This discretion, known as professional judgment, could result in an increase to a student’s financial need. NPSAS does not capture whether a student’s aid package was adjusted due to professional judgment. However, the 12 schools that provided information to us generally said they changed aid awards as a result of professional judgment for 5 percent or fewer of federal aid recipients. These cases describe situations under which federal aid recipients may legitimately receive more financial aid than their federally defined need. While each of the situations described provides a plausible explanation of how a combination of federal and nonfederal aid can raise overall aid above federally defined need, we cannot determine with certainty, without looking in detail at each case, why these aid recipients received more aid than their federally defined need. Compared to those federal aid recipients who did not receive aid greater than their federally defined need, the 732,000 who did were more likely to have higher family incomes, be dependent, or attend public universities. They are also more likely to have higher grade point averages or attend schools in the Southwest or Plains states. Among those variables that proved statistically significant, table 2 shows selected student and school characteristics that were associated with receiving aid greater than federally defined need. Appendix II more fully describes all of the variables used in our analysis and more completely discusses their levels of statistical significance. These patterns generally held regardless of whether the aid greater than federally defined need could be attributed to substitutable loans or a combination of federal and nonfederal aid. The one exception we found was that students who received aid greater than their need as a result of a combination of federal and nonfederal aid were more likely to be white. 
In addition, these 732,000 students were more likely to have financial aid packages consisting mostly of non-need-based federal aid or nonfederal aid. Among those variables that proved statistically significant, table 3 shows selected financial aid package characteristics that were associated with receiving aid greater than federally defined need. (See app. II for a more complete description of variables and their significance levels.) Based on NPSAS data alone, we cannot say why the characteristics listed in tables 2 and 3 are associated with a greater likelihood of receiving aid greater than federally defined need. Changing the Higher Education Act to limit the receipt of aid that is greater than students' federally defined financial need is not likely to achieve significant federal savings. However, the use of substitutable loans could increase overall student indebtedness. Any cost savings from changing the Higher Education Act to limit the receipt of aid that is greater than students' federally defined financial need would likely be very modest—much less than the $2.96 billion in such aid. In the case of the larger group of students and their families whose aid greater than federally defined need is attributable to substitutable loans, the actual cost to the government is not the face value of the loans. For guaranteed loans, the government incurs costs—primarily insurance claims payments to lenders for defaulted loans and special allowance payments made to lenders to ensure a guaranteed return on the loans they make. For direct loans, interest from loan repayments offsets costs the government incurs for defaults and interest payments to the Treasury on funds Education borrows to make loans. These interest earnings produce savings for the government. Determining the net cost of federal substitutable loans would require comparing savings generated by direct loans with the net costs associated with guaranteed loans.
We could not estimate these costs given the data available in NPSAS. For the smaller group of cases involving combinations of federal and nonfederal aid, any savings would depend on how aid is packaged. Assuming that most schools package loans and work study last—8 of the 12 schools that provided us with information said this was the typical practice at their institutions—loans and work study would most likely be eliminated first to keep aid packages within federally defined need limits. Any savings on loans would be derived using the same basic calculation we described above for substitutable loans. In addition, the government would also save the interest it pays on subsidized loans while students are still in school. Thus, these savings would be considerably smaller than the face value of the loan. With regard to work study, it is likely that schools rather than the federal government would obtain most of the savings. According to our analysis of the NPSAS data, this would occur because a larger percentage of these students received institution-funded work study rather than federally funded work study (29 percent versus 12 percent). Although changing the Higher Education Act to limit receiving aid greater than federally defined need is not likely to result in any substantial cost savings, continuing this practice may affect some students' loan indebtedness. The one-fifth of federal aid recipients who received substitutable loans may face higher monthly loan repayments that might constrain their other financial choices. In addition, as student loan indebtedness rises, borrowers could experience difficulty in meeting their monthly payments, particularly under weak economic conditions. The widespread use of substitutable loans might also affect Education's ability to help students and their families maintain their loan indebtedness at manageable levels. Officials at Education told us that the agency is committed to tracking overall student debt burden.
However, the 19 percent of students and their families who borrowed substitutable loans may have higher monthly repayments and spend a larger share of their income on loan repayments than other students. This could increase the average debt burden of these students above that of other students. While students and their families have a range of options for paying for college, the money students borrow could influence their later debt burden. Given Education's performance indicator of maintaining borrower indebtedness at less than 10 percent of income in the first year of repayment, this relationship should be of interest to the agency. Education may find it more difficult to meet this standard if indebtedness continues to grow through the use of substitutable loans. Such information might prove useful to help inform federal policymakers on how best to minimize student indebtedness. To ensure that the use of substitutable loans will not lead to unmanageable student loan indebtedness, we recommend that the Secretary of Education, over time, monitor the impact of substitutable loans on student debt burden and, if debt burden associated with substitutable loans rises substantially, develop and propose alternatives for the administration or Congress to consider to help students manage student loan debt burden. Such alternatives could range from shifting students into repayment plans that would lower their debt burden to limiting the use or amount of substitutable loans. In written comments on a draft of this report, Education agreed that student indebtedness is of concern; however, Education disagreed with what we included in our analysis. Specifically, we included PLUS loans to a student's family—usually to the parents—as part of our analysis. Education stated that by including these loans we were mischaracterizing student debt and that loans to families should be excluded from our analysis.
Education also stated that we should distinguish between students and their families as the recipients of federal financial aid. We modified the report to more clearly detail when non-students were responsible for the loans. However, based on the 1999-2000 NPSAS data, 1.3 million federal aid recipients received unsubsidized Stafford loans while 323,000 federal aid recipients received PLUS loans, indicating to us that far more substitutable loans are likely to be made to students. Education also had technical comments, which we incorporated as appropriate. See appendix III for a printed copy of Education's comments. We are sending copies of this report to the Chairmen and Ranking Members of the Senate Committee on Health, Education, Labor and Pensions and the House Committee on Education and the Workforce; and the Director of the Office of Management and Budget. We will also make copies available to others on request. This report is also available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-8403. Major contributors are listed in appendix IV. The objectives of this study were to determine how often students who were federal financial aid recipients received aid that was greater than their federally defined financial need, identify the student, school, and financial aid package characteristics associated with receiving such aid, and determine what the implications might be, if any, of changing the Higher Education Act to limit the receipt of aid that is greater than a student's federally defined need. When students receive financial aid from multiple sources or some aid that is not need-based, the potential exists for some students to receive aid that is greater than their federally defined need.
Most Title IV aid is based on a student’s federally defined financial need, which is the difference between the student’s cost of attendance and the family’s federally determined ability to pay these costs—known as the expected family contribution (EFC). To meet their EFC under Title IV, families can obtain non-need-based loans, which we refer to as substitutable loans. To carry out our objectives, we used the National Postsecondary Student Aid Study (NPSAS) data collected by the Department of Education’s National Center for Education Statistics. We also contacted 19 college and university financial aid officers to obtain information on their schools’ financial aid packaging policies and practices. We received responses from 12 of these officials. To determine the extent to which students received financial aid greater than their federally defined need, we analyzed the NPSAS data to identify the amount and source of financial aid received by full-time, full-year undergraduates who received aid from any federal source whether or not it was a Title IV program. We identified two distinct groups of students who received aid greater than their federally defined need. We first identified all students who received aid greater than the federally defined need, regardless of the source of that aid (see block A in fig. 2). The first group of students were those whose aid greater than federally defined need was accounted for by the substitutable loans they received (see block A-1 in fig. 2). The second group of students were those whose aid still remained greater than their federally defined need, after accounting for any substitutable loans in their aid packages (see block A-2 in fig. 2). To determine what student and school characteristics were associated with receiving aid greater than federally defined need, we again used NPSAS data for our analysis. 
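Before turning to those characteristics, the two-group breakdown just described can be sketched as follows (field names are ours, not NPSAS variable names; federally defined need is cost of attendance minus the EFC):

```python
def classify(total_aid, federal_need, substitutable_loans):
    """Assign an aid recipient to the groups shown in figure 2."""
    over_award = total_aid - federal_need
    if over_award <= 0:
        return "within need"              # not in block A
    if over_award <= substitutable_loans:
        return "A-1"                      # over-award fully explained by
                                          # unsubsidized Stafford/PLUS loans
    return "A-2"                          # aid still exceeds need after
                                          # netting out substitutable loans
```

For example, $12,000 of aid against $10,000 of federally defined need falls in group A-1 if at least $2,000 of the package is substitutable loans, and in group A-2 otherwise.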
For all of the students receiving aid greater than federally defined need and the second group of these students, we employed logistic regression models to estimate the association between student and school characteristics and the likelihood of receiving aid greater than federally defined need. We chose logistic models due to the dichotomous nature of the phenomenon of interest—whether or not students received aid greater than their federally defined need. We did not perform a similar analysis on the first group: because it made up such a large portion of the students receiving aid greater than their federally defined need, and because that aid could be attributed entirely to receiving substitutable loans, the results were unlikely to show this group to be different in any other way. The variables that we used are listed and defined in appendix II. In general, we included student characteristics such as race, marital status, and dependency status. We included such school characteristics as graduation rate, geographical location, and whether or not a school was public. We also included some characteristics of the aid packages the students received. To report the results of the regressions, we use odds ratio tables. (See app. II.) Some variables proved not to have a statistically significant association with receiving aid greater than federally defined need. For all of the students receiving aid greater than their federally defined need, these included whether the student was white, the graduation rate of the school, and if the majority of a student's aid package was composed from federal need-based loans. For the second group, whether the student was a veteran, was a U.S. citizen, and whether the majority of aid was received from federal need-based loans or nonfederal loans were statistically insignificant.
In analyzing the results for the second group of students, we sought to determine if a large proportion of these students had characteristics in common, such as receiving aid from specific programs or attending schools with a certain common characteristic (e.g., public versus private, regional location). We did not undertake any further analysis to identify how these students received aid that was greater than their federally defined need. This would have entailed individually analyzing each of the over 400 cases and obtaining additional information directly from the school. This analysis would have been beyond the scope of our review. To analyze the student and school characteristics that are associated with receiving aid greater than federally defined need, we ran logistic regressions from variables in the 1999-2000 NPSAS. We sought to determine which student, school, and aid package characteristics were significantly associated with the receipt of aid greater than the federally defined need. We included variables representing dependency status, grade point average (GPA), region of the country, race, veteran status, income, a private college indicator, the source of the majority of the student's aid, and the number of different aid sources in the aid package. The results for the models we used are odds ratios that estimate the relative likelihood of receiving aid greater than federally defined need for each factor. Table 4 shows these odds ratios for all students receiving aid greater than their federally defined need (see blocks A-1 and A-2 in fig. 2). Table 5 shows the results for the students whose aid greater than their need could not be attributed entirely to receiving substitutable loans (see block A-2 in fig. 2). If there were no significant differences between those who received aid greater than federal need and those who did not with regard to a particular characteristic, then the odds ratio would be 1.00.
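The odds-ratio measure used in tables 4 and 5 can be illustrated with a single dichotomous characteristic (the counts below are hypothetical, and the report's models estimate many characteristics simultaneously; this sketch shows only the single-variable arithmetic, including the compounding used for continuous variables such as GPA and income):

```python
import math

def odds_ratio_with_ci(a, b, c, d):
    """Odds ratio for a 2x2 table (a, b = group of interest with/without the
    outcome; c, d = reference group), with a Woolf-type 95 percent interval."""
    ratio = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(ratio) - 1.96 * se)
    high = math.exp(math.log(ratio) + 1.96 * se)
    return ratio, low, high

def odds_over_units(ratio_per_unit, units):
    """Odds ratio implied by a multi-unit change in a continuous variable,
    e.g., ten 0.1-point GPA steps at a ratio of 1.03 per step."""
    return ratio_per_unit ** units

# Hypothetical: 60 of 160 dependents over-awarded vs. 30 of 130 independents
ratio, low, high = odds_ratio_with_ci(60, 100, 30, 100)
significant = not (low <= 1.00 <= high)   # an interval containing 1.00 is
                                          # not statistically significant
```

Here the ratio of 2.0 with an interval of roughly (1.19, 3.36) would be read as the dependents being about twice as likely to receive aid greater than need, with the difference statistically significant.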
The more the odds ratio differs from 1.00 in either direction, the larger the effect. The odds ratios were generally computed in relation to a reference group; for example, if the odds ratio refers to being a dependent student, then the reference group would be independent students. Some variables, such as GPA and income, are continuous in nature. In these cases, the odds ratio can be interpreted as representing the increase in the likelihood of receiving aid greater than federally defined need given a 1-unit increase in the continuous variable. An odds ratio greater than 1.00 indicates an increase in the likelihood of receiving aid greater than the federally defined need relative to the reference group, whereas an odds ratio less than 1.00 indicates a decrease in the likelihood of receiving aid greater than the federally defined need relative to the reference group. Both tables also include the 95 percent confidence intervals around the odds ratios. If these intervals contain 1.00, then the difference is not statistically significant. Table 6 shows the means for all the variables considered. Dependent. All else equal, dependents are more than twice (2.62) as likely to receive aid greater than federally defined need. Currently Married. All else equal, being currently married (as opposed to single or separated, divorced or widowed) increased the likelihood of receiving aid greater than federally defined need by a factor of 1.63. Veteran. Veterans were almost twice as likely (1.83) as nonveterans to receive aid greater than the federally defined need. Household Size. As the size of a household increases, the likelihood of receiving aid greater than the federally defined need decreases. For example, a student from a two-member household is 1.4 times more likely to receive aid greater than federally defined need than a person from a three-member household. GPA. GPA is usually calculated on a 4-point scale.
In the NPSAS data set, GPA is multiplied by 100, or reported on a 400-point scale. In our analysis, we have GPA ranging from 0 to 40, such that a unit increase in our GPA variable (say from 37 to 38) represents a 0.1 change in grade point average as it is usually calculated (3.7 to 3.8). An odds ratio of 1.03 should thus be interpreted as follows: On a 4-point GPA scale, increasing GPA by one-tenth of one point (2.53 to 2.63) increases the likelihood of receiving aid greater than federally defined need by 3 percent. Thus, a change of one grade point (2.5 to 3.5) increases the likelihood of receiving aid greater than federally defined need by 35 percent (1.03^10 ≈ 1.35). Income. For every $5,000 change in income, the probability of receiving aid greater than federally defined need increases by 24 percent. A person earning $50,000 more than another, all else equal, is 8.6 (1.24^10 ≈ 8.59) times more likely to receive aid greater than federally defined need. Private Not-for-profit University. Attending a private university decreases the likelihood of receiving aid greater than federally defined need. Someone who attends a public university increases his or her chances of receiving aid greater than federally defined need by a factor of 2.86 (1/0.35). Four-year School. All else equal, attending a 4-year institution increases the likelihood of receiving aid greater than federally defined need by a factor of 1.5. Urban. All else equal, attending a rural (rather than urban) institution increases the likelihood of receiving aid greater than federally defined need by a factor of 1.5 (1/0.68). Plains-Southwest. A student attending school in the Plains states or the Southwest is 1.7 times more likely to receive aid greater than federally defined need than a similar student attending school in other regions of the country. The aid package variables represent the source of the “majority” of the student's aid, if there was a majority source.
The omitted reference group is the category of people whose majority of aid comes from federal grants such as Pell and Supplemental Educational Opportunity Grants (SEOG). About 17 percent of the sample falls into the reference group. In general, the people who had a majority of their aid coming from federal grants were less likely to receive aid greater than the federally defined need than any other group (as defined by majority of aid source). Majority from Non-Need-Based Federal Loans. Holding all else equal, a student who receives a majority of aid from federal, non-need-based loans is almost 5 times more likely to receive aid greater than federally defined need than a student who receives the majority of aid from federal grants. Majority from Federal Work Study and PLUS Loans to Parents. Holding all else equal, a student who receives a majority of aid from federal work study and PLUS loans is about 6 (6.02) times more likely to receive aid greater than federally defined need than a student who receives the majority of aid from federal grants. Majority from Nonfederal Grants, Scholarships and Work Study. Holding all else equal, a student who has a majority of aid coming from nonfederal grants or scholarships, work study, Veterans/Department of Defense benefits, Vocational Rehabilitation assistance or other non-loan sources is about 6.12 times more likely to receive aid greater than federally defined need than a student who receives the majority of aid from federal grants. Majority from Nonfederal Loans. Holding all else equal, a student who receives a majority of aid from nonfederal loan sources is about 7.8 times more likely to receive aid greater than federally defined need than a student who receives the majority of aid from federal grants. No Majority. A student who has no distinct majority source of aid is 2.33 times more likely to receive aid greater than federally defined need than a student who receives the majority of aid from federal grants.
Number of Aid Components in Aid Packages. Having more aid sources in a student's aid package results in a higher probability of receiving aid greater than the federally defined need. Having an additional aid component increases the likelihood of receiving aid greater than federally defined need by a factor of 1.33. This means that having five sources of aid, rather than one source of aid, can cause a roughly threefold increase in the likelihood of receiving aid greater than federally defined need (1.31^4 ≈ 2.99). Substitutable Loans. Receiving loans that can be substituted for EFC is associated with a large increase in the likelihood of receiving aid greater than federally defined need. This can be attributed to the fact that aid greater than federally defined need can be accounted for by substitutable loans for over 85 percent of the students who received such aid (628,000 out of 732,000). Dependent. All else equal, being a dependent increases the probability of receiving aid greater than federally defined need threefold. White. Being white as opposed to nonwhite almost doubles the chances of getting aid greater than federally defined need (1.75). GPA. GPA is usually calculated on a 4-point scale. In the NPSAS data set, GPA is multiplied by 100, or reported on a 400-point scale. In our analysis, we have GPA ranging from 0 to 40, such that a unit increase in our GPA variable (say from 37 to 38) represents a 0.1 change in grade point average as it is usually calculated (3.7 to 3.8). An odds ratio of 1.05 should thus be interpreted as follows: On a 4-point GPA scale, increasing GPA by one-tenth of one point (2.53 to 2.63) increases the likelihood of receiving aid greater than federally defined need by 5 percent. Thus, a change of one grade point (2.5 to 3.5) increases the likelihood of receiving aid greater than federally defined need by 63 percent (1.05^10 ≈ 1.63). Income.
For every $5,000 change in income, the probability of receiving aid greater than federally defined need increases by 6 percent. Thus, a $50,000 increase in income (say between someone earning $25,000 and someone earning $75,000) results in a 79 percent (1.06^10 = 1.79) increase in the likelihood of receiving aid greater than federally defined need. Plains-Southwest. A student attending school in the Plains states or the Southwest is 1.77 times more likely to receive aid greater than federally defined need than a similar student attending school in other regions of the country. Private University. Attending a private university decreases the likelihood of receiving aid greater than federally defined need. Someone who attends a public university increases his or her chances of receiving aid greater than the federally defined need by a factor of 1.35 (1/0.74). The aid package variables represent the source of the “majority” of the student’s aid, if there was a majority source. The omitted reference group is the category of people whose majority of aid comes from federal grants such as Pell and SEOG. About 17 percent of the sample falls into the reference group. In general, the students who had a majority of their aid coming from federal grants were less likely to receive aid greater than federally defined need than any other group (as defined by majority of aid source). Majority from Non-Need-Based Federal Loans. A student who receives a majority of aid from federal non-need-based loans is 6.6 times more likely to receive aid greater than federally defined need than a student who receives the majority of aid from federal grants. Majority from Federal Work Study and PLUS Loans to Parents. A student who receives a majority of aid from federal work study and PLUS loans is 3 times more likely to receive aid greater than federally defined need than a student who receives the majority of aid from federal grants. 
Majority from Nonfederal Grants, Scholarships and Work Study. Holding all else equal, a student who receives a majority of aid from nonfederal grants or scholarships, work study, Veterans/Department of Defense benefits, Vocational Rehabilitation assistance or other nonloan sources is about 20 times more likely to receive aid greater than federally defined need than a student who receives the majority of aid from federal grants. No Majority. A student who has no distinct majority source of aid is about 7 times more likely to receive aid greater than federally defined need than a student who receives the majority of aid from federal grants. Number of Aid Components in Aid Packages. Having more aid sources in a student’s aid package results in a higher probability of receiving aid greater than federally defined need. Having an additional aid component increases the likelihood of receiving aid greater than federally defined need by a factor of 1.2. This means that having five sources of aid rather than one source of aid can double the likelihood of receiving aid greater than federally defined need (1.21^4 = 2.14). In addition to those named above, Mary Crenshaw, Patrick diBattista, Nagla’a El-Hodiri, Kathy Hurley, Joel Marus, John Mingus, Doug Sloane, and Wendy Turenne made important contributions to this report.
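The compounding arithmetic used throughout the odds-ratio interpretations above can be checked with a short script. The odds ratios are those quoted in the text; the function name is ours, for illustration only.

```python
# Illustrative check of odds-ratio compounding: a per-unit odds ratio
# from a logistic regression compounds multiplicatively over a change
# of several units of the explanatory variable.

def compounded_odds_ratio(per_unit_odds_ratio: float, units: int) -> float:
    """Odds ratio implied by a change of `units` units of the variable."""
    return per_unit_odds_ratio ** units

# GPA: odds ratio of 1.05 per 0.1 grade point, so one full grade point
# (ten units) compounds to about 1.63, a 63 percent increase.
print(round(compounded_odds_ratio(1.05, 10), 2))  # 1.63

# Income: odds ratio of 1.06 per $5,000, so $50,000 (ten units)
# compounds to about 1.79, a 79 percent increase.
print(round(compounded_odds_ratio(1.06, 10), 2))  # 1.79
```

The same rule reproduces the aid-component figures: four additional components at an odds ratio of about 1.2 per component roughly double the odds.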
Over half of the $80.4 billion in financial aid provided to college students in the 2000-01 school year came from the federal government in the form of grants and loans provided under Title IV of the Higher Education Act (HEA). To help finance their education, students and families may have received other funds from states, private groups or lenders, and/or the schools themselves. We initiated this study to, among other things, determine how often federal financial aid recipients received aid that was greater than their federally defined need and what cost or other implications might result from changing HEA to limit such aid. We found that in school year 1999-2000, of the 3.4 million full-time/full-year federal aid recipients, 22 percent (732,000) received a total of $2.96 billion in financial aid that was greater than their federally defined financial need. Of these, 628,000 received an estimated $2.72 billion in such aid by obtaining non-need-based loans--which we identify as substitutable loans--that families borrow to meet their expected family contribution. Title IV allows students and families to obtain these non-need-based loans to meet their expected family contribution. Another 104,000 federal aid recipients received an estimated $238 million in such aid as a result of receiving a combination of aid from federal and nonfederal sources. Changing the HEA to limit the receipt of aid that is greater than students' federally defined financial need is not likely to achieve significant federal savings, although the use of substitutable loans may increase overall student indebtedness. In terms of cost implications, limiting those instances where federal aid recipients receive substitutable loans--which is the main reason why students received aid greater than their federally defined need--will not likely result in significant savings. 
While the government would not have to pay default claims or special allowance payments on loans it guarantees, it would forgo any interest earnings on loans it makes directly. Any savings from limiting these loans would be substantially less than the total amount of the loans made--the $2.72 billion. However, the widespread use of substitutable loans may increase the average debt of borrowers and may affect Education's ability to help students and their families maintain their loan debt at manageable levels.
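The mechanics described above can be sketched with a toy calculation: federally defined need is the cost of attendance less the expected family contribution (EFC), and a package exceeds that need when total aid, including any substitutable loan borrowed against the EFC, is larger. The figures and function below are hypothetical, for illustration only.

```python
# Hypothetical illustration of aid exceeding federally defined need.
# Federally defined need = cost of attendance - expected family
# contribution (EFC); "substitutable" non-need-based loans borrowed
# against the EFC still count toward the student's total aid.

def aid_over_need(cost_of_attendance: float, efc: float, total_aid: float) -> float:
    """Dollars of aid above federally defined need (zero if none)."""
    need = cost_of_attendance - efc
    return max(0.0, total_aid - need)

# A $15,000 cost of attendance and a $5,000 EFC leave $10,000 of need.
# A $10,000 need-based package plus a $4,000 substitutable loan yields
# $4,000 of aid above the federally defined need.
print(aid_over_need(15_000, 5_000, 14_000))  # 4000.0
```

With no substitutable loan in the package (total aid of $10,000 or less in this example), the same calculation returns zero.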
Overall, agencies are taking high-risk problems seriously, trying to correct them, and making progress in many areas. The Congress has also acted to address several problems affecting these high-risk areas through oversight hearings and specific legislative initiatives. Full and effective implementation of legislative mandates, our suggestions, and corrective measures by agencies, however, has not yet been achieved because the high-risk areas involve long-standing problems that are difficult to correct. The following discussion provides a quick synopsis of progress and remaining challenges related to many high-risk areas. Detailed information on the current status of all 25 high-risk areas, which are listed in appendix I, is available in our overview report, quick reference guide, and individual reports included in our set of 1997 high-risk reports. Reports included in this series are listed at the end of this testimony. Our high-risk initiative has monitored five areas that affect accountability and cost-effective management of Department of Defense (DOD) programs: financial management, contract management, inventory management, weapon systems acquisition, and the Corporate Information Management (CIM) initiative. These areas are key to effectively managing DOD’s vast resources, including a budget of over $250 billion in fiscal year 1996 and over $1 trillion in assets worldwide. While improvement activities have been started, DOD’s high-risk problems are especially serious and much remains to be done to resolve them. First, DOD’s lingering financial management problems are among the most severe in government. For example, the Department has acknowledged over 30 material weaknesses that cross the spectrum of its financial operations, including continuing problems in accurately accounting for billions of dollars in problem disbursements. 
Also, DOD has reported that of its nearly 250 financial systems only 5 conform fully with governmentwide financial systems standards. Further, financial audits have highlighted significant deficiencies in every aspect of DOD’s financial management and reporting, resulting in the failure of any major DOD component to receive a positive audit opinion. Since 1990, auditors have made over 400 recommendations aimed at helping to correct these weaknesses. Deficiencies such as these prevent DOD managers from obtaining the reliable financial information needed to make sound decisions on alternate uses for both current and future resources. DOD’s financial management leaders have recognized the importance of tackling these problems and have many initiatives under way to address widespread financial management problems. Fixing DOD’s financial management problems is also critical to the resolution of the Department’s other high-risk areas. In addition, as DOD seeks to streamline its contracting and acquisition processes—including contract administration and audit—to adjust to reduced staffing levels, new business process techniques will be key to accomplishing effective and efficient oversight in the future. DOD contracts now cost about $110 billion annually. Without an improved and simplified contract payment system, DOD continues to risk overpaying contractors millions of dollars. DOD is aware of the seriousness of its payment problems and is taking steps to address them. Also, DOD needs to further strengthen its oversight of contractor cost-estimating systems, which are critical to ensuring sound price proposals and reducing the risk that the government will pay excessive prices. While DOD has improved its oversight of contractors’ cost-estimating systems, poor cost-estimating systems remain an area of concern at some contractor locations. 
Further, about half of DOD’s centrally managed inventory of spare parts, clothing, medical supplies, and other secondary inventory items, which totaled about $70 billion in September 1995, does not need to be on hand to support war reserves or current operating requirements. DOD has had some success in addressing its inventory management problems and is in the midst of changing a culture that believed it was better to overbuy items than to manage with just the amount of stock needed. Also, with reduced force levels and the implementation of some of our recommendations, DOD has reduced its centrally managed inventory by about $20 billion. DOD has implemented certain commercial best practices, but only in a very limited manner and has made little progress in developing the management tools needed to help solve its long-term inventory management problems. Consequently, inventory managers continue to have difficulty managing DOD’s multibillion dollar inventory supply systems efficiently and effectively. Also, despite DOD’s past and current efforts to reform its acquisition system, wasteful practices still add billions of dollars to defense weapon systems acquisition costs, which are about $79 billion annually. DOD continues to (1) generate and support acquisition of new weapon systems that will not satisfy the most critical weapon requirements at minimal cost and (2) commit more procurement funds to programs than can reasonably be expected to be available in future defense budgets. Many new weapon systems cost more and do less than anticipated, and schedules are often delayed. Moreover, the need for some of these costly weapons, particularly since the collapse of the Soviet Union, is questionable. 
Finally, DOD started the CIM initiative in 1989 with the expectation of saving billions of dollars by streamlining operations and implementing standard information systems supporting such important business areas as supply distribution, material management, personnel, finance, and transportation. However, 8 years after beginning CIM, and after spending a reported $20 billion, DOD’s savings goal has not been met because the Department has not yet implemented sound management practices. Not surprisingly, the results of DOD’s major technology investments have been meager and some investments are likely to result in a negative return on investment. The Department estimates that it will spend more than an additional $11 billion on system modernization projects between now and the year 2000. As part of its Clinger-Cohen Act implementation efforts, the Department is establishing a framework to use its planning, programming, and budgeting system to better manage this investment. While this framework is a step in the right direction, these corrective actions are just the beginning. At the Internal Revenue Service (IRS) we have monitored four high-risk areas that affect IRS’ ability to ensure that all revenues are collected and accounted for: financial management, accounts receivable, filing fraud, and tax systems modernization (TSM). In 1995, IRS reported collecting $1.4 trillion from taxpayers, disbursing $122 billion in tax refunds, and managing an estimated accounts receivable inventory of $113 billion in delinquent taxes. The reliability of IRS’ financial information is critical to effectively manage the collection of revenue to fund the government’s operations. However, our audits of IRS’ financial statements have identified many significant weaknesses in accurately accounting for revenue and accounts receivable, as well as for funds provided to carry out IRS’ operations. 
IRS has made progress in improving payroll processing and accounting for administrative operations and is working on solutions to revenue and accounts receivable accounting problems. However, much remains to be done, and effective management follow-through is essential to achieving fully the goals of the CFO Act. In addition, IRS is hampered in efficiently and effectively managing its huge inventory of accounts receivable due to inadequate management information. The root cause here is IRS’ antiquated information systems and outdated business processes, which handle over a billion tax returns and related documents annually. IRS has undertaken many initiatives to deal with its accounts receivable problems, including correcting errors in its tax receivable masterfile and attempting to speed up aspects of the collection process. Efforts such as these appear to have had some impact on collections and the tax debt inventory, but many of the efforts are long-term in nature and demonstrable results may not be available for several years. Further, IRS’ efforts to reduce filing fraud have resulted in some success, especially through more rigid screening in the electronic filing program, but this continues to be a high-risk area. IRS’ goal is to increase electronic filings, which would strengthen its fraud detection capabilities. But to achieve its electronic filing goal, IRS must (1) identify those groups of taxpayers who offer the greatest opportunity for filing electronically and (2) develop strategies focused on eliminating or alleviating impediments that have inhibited those groups from participating in the program. In attempting to overhaul its timeworn, paper-intensive approach to tax return processing, IRS has spent or obligated over $3 billion on its TSM efforts. This program has encountered severe difficulties. 
Currently, funding for the initiative has been curtailed, and IRS and the Department of the Treasury are taking several steps to address modernization problems and implement our recommendations. However, much more progress is needed to fully resolve serious underlying management and technical weaknesses. Also, Medicare—the nation’s second largest social program—is inherently vulnerable to and a perpetually attractive target for exploitation. The Congress and the President have been seeking to introduce changes to Medicare to help control program costs, which were $197 billion in fiscal year 1996. At the same time, they are concerned that the Medicare program loses significant amounts due to persistent fraudulent and wasteful claims and abusive billings. The Congress has passed the Health Insurance Portability and Accountability Act of 1996 to protect Medicare from exploitation by adding funding to bolster program safeguard efforts and making the penalties for Medicare fraud more severe. Effective implementation of this legislation and other agency actions is key to mitigating many of Medicare’s vulnerabilities to fraud and abuse. Also, the Health Care Financing Administration (HCFA), which runs the Medicare program, has begun to acquire a new claims processing system, the Medicare Transaction System (MTS), to provide, among other things, better protection from fraud and abuse. In the past, we have reported on risks associated with this project, including HCFA’s plan to implement the system in a single stage rather than incrementally, difficulty in defining requirements, inadequate investment analysis, and significant schedule problems. HCFA has responded to these concerns by (1) changing its single-stage approach to one under which the system will be implemented incrementally and (2) working to resolve other reported problems. 
Since our high-risk program began 7 years ago, we have called attention to difficulties major lending agencies—the Departments of Housing and Urban Development (HUD), Education, and Agriculture—have experienced in managing federal credit programs and the government’s resulting exposure to large losses. As of September 30, 1995, total federal credit assistance outstanding was reported to be over $941 billion, consisting of (1) $204 billion in loans receivable held by federal agencies, including $160 billion in direct loans and $44 billion in defaulted guaranteed loans that are now receivables of the federal government, and (2) $737 billion in loans guaranteed by the federal government. HUD is responsible for managing more than $400 billion in insured loans; $435 billion in outstanding securities; and, in fiscal year 1995, over $31.8 billion in discretionary budget outlays. However, effectively carrying out these responsibilities is hampered by HUD’s weak internal controls, inadequate information and financial management systems, an ineffective organization structure, and an insufficient mix of staff with the proper skills. These problems are not new—we reported them in 1995 and they were a major factor contributing to the incidents of fraud, waste, abuse, and mismanagement reported in the late 1980s. HUD has undertaken some improvement efforts to correct these problems through such means as implementing a new management planning and control program. However, HUD’s improvement efforts are far from fruition, and long-standing, fundamental problems remain. HUD’s program will remain high risk until the agency completes more of its planned corrective actions and the administration and the Congress reach closure on a restructuring that (1) focuses HUD’s mission and (2) consolidates, reengineers, and/or reduces HUD’s programs. 
What is needed is for the administration and the Congress to agree on the future direction of federal housing and community development policy and put in place the organizational and program delivery structures that are best suited to carry out that policy. Actions by the Department of Education, combined with legislative changes, have achieved some results in addressing many of the underlying problems with the student financial aid programs’ structure and management. In fiscal year 1995, the federal government paid out over $2.5 billion to make good its guarantee on defaulted student loans—an amount that represents an improvement over the last several years. The Department has taken many administrative actions to correct problems and improve program controls, but it must overcome management and oversight problems that have contributed to abuses by some participating schools. Since our last high-risk report series in 1995, the Congress has enacted legislation—Title VI of the Federal Agriculture Improvement and Reform Act of 1996—to make fundamental changes in the farm loan programs’ loan-making, loan-servicing, and property management policies. The Department of Agriculture is in the process of implementing the new legislative mandates and other administrative reforms to resolve farm loan program risks. The impact of these actions on the $17 billion farm loan portfolio’s financial condition will not be known for some time. The Debt Collection Improvement Act of 1996 also was enacted to expand and strengthen agencies’ debt collection practices and authorities. This important new legislation can provide a much needed new impetus to improve lending program performance, but it will take time to implement the act. Additional agency attention to improve lending management and actions by the Congress are necessary as well. With government downsizing, civilian agencies will continue to rely heavily on contractors to operate programs. 
While this approach can help to achieve program goals with a reduced workforce, it can also result in increased vulnerability to risks, such as schedule slippages, cost growth, and contractor overpayments. Our high-risk program has followed efforts to resolve contract management weaknesses undertaken by several of the government’s largest civilian contracting agencies—the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), and the Environmental Protection Agency (EPA) for the Superfund. Most of DOE’s $17.5 billion in 1995 contract obligations was for its management and operating contracts. DOE has made headway in overcoming its history of weak contractor management through a major contract reform effort that has included developing an extensive array of policies and procedures. Although the Department recently adopted a policy favoring competition in the award of these contracts, in actual practice most contracts continue to be made noncompetitively. NASA has made considerable progress in better managing and overseeing contracts, for which it spends about $13 billion a year. The improvements have included establishing a process for collecting better information for managing contractor performance and placing greater emphasis on contract cost control and contractor performance. Our most recent work, however, identified additional problems in contract management and opportunities for improving procurement oversight. For the past several years, EPA has focused attention on strengthening its management and oversight of Superfund contractors. Nonetheless, EPA remains vulnerable to contractor overpayments. At the same time, the magnitude of the nation’s hazardous waste problem, estimated to cost hundreds of billions of dollars, calls for the efficient use of available funds to protect public health and the environment. In addition to the 20 areas we previously designated high risk, we are adding 5 new ones. 
We are alerting the Congress to these new areas because they involve serious problems: fraud and abuse in benefit claims, widespread computer security weaknesses, inefficient Department of Defense operation and support activities, the possibility of disastrous computer disruptions in service to the public, and the potential for a costly, unsatisfactory 2000 Decennial Census. The first newly designated high-risk area involves overpayments in the Supplemental Security Income (SSI) program, which provided about $22 billion in federal benefits to recipients between January 1, 1996, and October 31, 1996. One root cause of SSI overpayments, which have grown to over $1 billion annually, is the difficulty the Social Security Administration has in corroborating financial eligibility information that program beneficiaries self-report and that affects their benefit levels. Determining whether a claimant’s impairment qualifies an individual for disability benefits can often be difficult as well, especially in cases involving applicants with mental impairments and other hard-to-diagnose conditions. Second, information systems security weaknesses across government have now been designated high risk. These weaknesses pose high risk of unauthorized access and disclosure or malicious use of sensitive data. Many federal operations that rely on computer networks are attractive targets for individuals or organizations with malicious intent. Examples of such operations include law enforcement, import entry processing, and various financial transactions. Most notably, DOD’s systems may have experienced as many as 250,000 attacks from hackers during 1995 alone, with about 64 percent of them being successful and most going undetected. Since June 1993, we have issued over 30 reports describing serious information security weaknesses at major federal agencies. 
In September 1996, we reported that during the previous 2 years, serious information security control weaknesses had been reported for 10 of the 15 largest federal agencies. We have made dozens of recommendations to individual agencies and the Office of Management and Budget for improvement, and they have started acting on many of them. Third, DOD’s efforts to reduce its infrastructure will now be monitored as part of our high-risk efforts. Over the last 7 to 10 years, DOD has reduced operations and support costs, which will amount to about $146 billion this year. However, billions of dollars are wasted annually on inefficient and unneeded DOD activities. DOD has, in recent years, undergone substantial downsizing in force structure. However, commensurate reductions in operations and support costs have not been achieved. Reducing the cost of excess infrastructure activities is critical to maintaining high levels of military capabilities. Expenditures on wasteful or inefficient activities divert limited defense funds from pressing defense needs, such as the modernization of weapon systems. Fourth, we have designated another serious governmentwide computer information systems issue, the Year 2000 Problem, as a new high-risk area. This problem poses the high risk that computer systems throughout government will fail to run or malfunction because computer equipment and software were not designed to accommodate the change of date at the new millennium. For example, IRS’ tax systems could be unable to process returns, which in turn could jeopardize the collection of revenue and the entire tax processing system. Federal systems used to track student education loans could produce erroneous information on their status, such as indicating that an unpaid loan has been satisfied. 
Or the Social Security Administration’s disability insurance process could experience major disruptions because the interface with various state systems fails, thereby causing delays and interruptions in disability payments to citizens. The fifth new high-risk area involves the need for agreement between the administration and the Congress on an approach that will both minimize the risk of an unsatisfactory 2000 Decennial Census and keep the cost of doing it within reasonable bounds. The longer the delay in securing agreement over design and funding, the more difficult it will be to execute an effective census, and the more likely it will be that the government will have spent billions of dollars and still have demonstrably inaccurate results. The country can ill afford an unsatisfactory census at the turn of the century, especially if it comes at a substantially higher cost than previous censuses. The census results are critical to apportioning seats in the House of Representatives; they are also used to allocate billions of dollars in federal funds for numerous programs and to guide the plans and decisions of government, business, education, and health institutions in the multibillion dollar investments they make. Shifting to the future, the government can gain major benefits by focusing on the resolution of high-risk problems and fully and effectively implementing the legislative foundation established for broader management reforms. As countless studies we have performed have long noted and our high-risk series of reports demonstrates, federal agencies often fail to appropriately manage their finances, identify clearly what they intend to accomplish, or do the job effectively with a minimum of waste. Left unresolved, persistent and long-standing high-risk areas will result in the government continuing to needlessly lose billions of dollars and missing huge opportunities to achieve its objectives at less cost and with better service delivery. 
The 25 areas that are the focus of our high-risk program cover almost all of the government’s annual $1.4-trillion revenue collection efforts and hundreds of billions of dollars in annual federal expenditures. Consequently, further progress to fully and effectively implement actions to resolve high-risk problems can result in substantial savings, for example, by reducing Medicare losses due to fraudulent and abusive claims, which could be from $6 billion to as much as $20 billion based on 1996 outlays; decreasing SSI overpayments, which have grown to over $1 billion a year; cutting back further on unneeded centrally managed defense inventories, which DOD succeeded in reducing by $23 billion during the 6-year period from 1989 to 1995; implementing better practices for acquiring weapon systems and reducing defense infrastructure, which are two areas that each experience billions of dollars in unneeded costs annually; and adopting improved contract management practices, as NASA is doing with considerable progress. For instance, NASA lowered the value of contract changes for which prices had not yet been negotiated from $6.6 billion in December 1991 to less than $500 million in September 1996. In addition, overcoming several high-risk problems has great potential for increased collections or other monetary gains to the government. For instance, these benefits are possible by further preventing or deterring tax filing fraud, which involved over 62,000 fraudulent returns with refunds of almost $132 million in 1995; reducing the growing inventory of tax assessments, which was $216 billion at the end of fiscal year 1996; ensuring that duties, taxes, and fees on importations are properly assessed and collected by the Customs Service and that refunds of such amounts are valid; and continuing to implement improved credit management practices. 
For example, the Department of Education has increased collections on defaulted loans from $1 billion in fiscal year 1992 to $2 billion in fiscal year 1995. Information technology is now integral to nearly every aspect of federal government operations and thus is pivotal to the government’s interaction with the public and critical to public health and safety issues. In the past 6 years, federal agencies have spent about $145 billion on information systems. Yet, despite years of experience in developing and acquiring systems, agencies across government continue to have chronic problems harnessing the full potential of information technology to improve performance, cut costs, and/or enhance responsiveness to the public. We have already discussed in this testimony the high risks associated with two multibillion dollar information systems modernizations—IRS’ tax systems modernization and DOD’s corporate information management initiative. In addition, the information systems modernization efforts of other agencies are at risk of being late, running over cost, and falling short of promised benefits. Our high-risk initiative includes two of these modernizations—those at the Federal Aviation Administration (FAA) and the National Weather Service (NWS). FAA’s $34-billion air traffic control (ATC) modernization has historically experienced cost overruns, schedule delays, and performance shortfalls. While FAA has had success on a recent small, well-defined effort to replace one aging system, the underlying causes of its past problems in modernizing larger, more complex ATC systems remain and must be addressed for the modernization to succeed. 
We recently identified and made recommendations to correct several of these root causes, including (1) strengthening project cost estimating and accounting practices and (2) defining and enforcing an ATC-wide system architecture, and we have work under way to identify other improvements that could help to resolve the modernization’s long-standing problems. The success of NWS’ $4.5 billion modernization effort hinges on how quickly the Service addresses problems with the existing system’s operational effectiveness and efficient maintenance and on how well it develops and deploys the remaining system. NWS has acknowledged that a technical blueprint is needed and is currently developing one. To improve situations such as these and stop bad information technology investments, we have worked closely with the Congress to fundamentally revamp and modernize federal information management practices. Our study of leading public and private sector organizations showed how they applied an integrated set of management practices to create the information technology infrastructure they needed to dramatically improve their performance and achieve mission goals. These practices provide federal agencies with essential lessons in how to overcome the root causes of their chronic information management problems. The 104th Congress used these lessons to create the first significant reform in information technology management in over a decade: the 1995 Paperwork Reduction Act and the Clinger-Cohen Act of 1996. These laws require agencies to implement a framework of modern technology management—one that is based on practices followed by leading public and private sector organizations that have successfully used technology to dramatically improve performance and meet strategic goals. 
These laws emphasize involving senior executives in information management decisions, establishing senior-level Chief Information Officers, tightening controls over technology spending, redesigning inefficient work processes, and using performance measures to assess technology’s contribution to achieving mission results. These management practices provide a proven, practical means of addressing the federal government’s information problems, maximizing benefits from technology spending, and controlling the risks of systems development efforts. The challenge now is for agencies to apply this framework to their own technology efforts, particularly those at high risk of failure. Traditionally, federal agencies have used the amount of money directed toward their programs, the level of staff deployed, or even the number of tasks completed as measures of their performance. But at a time when the value of many federal programs is undergoing intense public scrutiny, an agency that reports only these measures has not answered the defining question of whether these programs have produced real results. For high-risk areas, measuring performance and focusing on results is key to pinpointing opportunities for improved performance and increased accountability. 
For instance, performance measures would be useful for guiding management of defense inventory levels to prevent the procurement of billions of dollars of centrally managed inventory items that may not be needed; reaching agreement with the Congress on and monitoring acceptable levels of errors in benefit programs, which may never be totally eliminated but can be much better controlled; monitoring loan loss levels and delinquency rates for the government’s direct loan and loan guarantee programs—multibillion dollar operations in which losses for a variety of programs involving farmers, students, and home buyers are expected but can be minimized with greater oversight; and assessing the results of tax enforcement initiatives, delinquent tax collection activities, and filing fraud reduction efforts. Yesterday, we testified before the Committee on using the Government Performance and Results Act of 1993 (GPRA) to assist congressional and executive branch decision-making. Under GPRA, every major federal agency must now ask itself basic questions about what performance is to be measured and how performance information can be used to make improvements. GPRA requires agencies to set goals, measure performance, and report on their accomplishments. This will not be an easy transition, nor will it be quick. GPRA will be more difficult for some agencies to apply than for others. But GPRA has the potential for adding greatly to government performance—a particularly vital goal at a time when resources are limited and public demand is high. To help the Congress and federal managers put GPRA into effect, we have identified key steps that agencies need to take toward its implementation, along with a set of practices that can help make that implementation a success. Reliable financial information is key to better managing government programs, providing accountability, and addressing high-risk problems. 
The government’s financial systems are all too often unable to perform the most rudimentary bookkeeping for organizations, many of which are far larger than the nation’s largest private corporations. Federal financial management suffers from decades of neglect and failed attempts to improve financial management and modernize outdated financial systems. This situation is illustrated in a number of high-risk areas, including the weaknesses that permeate critical DOD financial management areas, the substantial improvements that are needed in IRS’ accounting and financial reporting, the significant problems that continue to be identified during audits of the Customs Service’s financial statements, and the fundamental control weaknesses that resulted in the HUD Inspector General being unable to give an opinion on the Department’s fiscal year 1995 financial statements. As a result of situations such as these, financial information has not been reliable enough to use in federal decision-making or to provide the requisite public accountability. Good information on the full costs of federal operations is frequently absent or extremely difficult to reconstruct, and complete, useful financial reporting is not yet in place. The landmark Chief Financial Officers (CFO) Act spelled out a long overdue and ambitious agenda to help resolve these types of financial management deficiencies. Important and steady progress is being made under the act to bring about sweeping reforms and rectify the devastating legacy from inattention to financial management. Moreover, the regular preparation of financial statements and independent audit opinions required by the 1990 act, as expanded by the Government Management Reform Act of 1994, are bringing greater clarity and understanding to the scope and depth of problems and needed solutions. 
Under the expanded CFO Act, the 24 largest agencies are required to prepare and have audited financial statements for their entire operations, beginning with those for fiscal year 1996. Together, these agencies account for virtually the entire federal budget. Also, the 1994 expansion of the act requires the preparation and audit of consolidated governmentwide financial statements, beginning with those for fiscal year 1997. Making CFO Act reforms a reality in the federal government remains a challenge and a great deal more perseverance will be required to sustain the current momentum and successfully overcome decades of serious neglect in fundamental financial management operations and reporting methods. But fully and effectively implementing the CFO Act is a very important effort because it is a key to achieving better accountability; implementing broader management reforms, such as GPRA; and providing the nation’s leaders and the public with a wealth of relevant information on the government’s true financial status. We will continue to identify ways for agencies to more effectively manage and control high-risk areas and to make recommendations for improvements that can be implemented to overcome the root causes of these problems. Also, we have long supported annual congressional hearings that focus on agencies’ accountability for correcting high-risk problems and implementing broad management reforms. Mr. Chairman, this concludes my statement. I would be happy to now respond to any questions. 
Financial management
Contract management
Inventory management
Weapon systems acquisition
Defense infrastructure (added in 1997)
Tax Systems Modernization
Air traffic control modernization
Defense’s Corporate Information Management initiative
National Weather Service modernization
Information security (added in 1997)
The Year 2000 Problem (added in 1997)
Medicare
Supplemental Security Income (added in 1997)
Superfund

Also, planning for the 2000 Decennial Census was designated high risk in February 1997.

An Overview (GAO/HR-97-1)
Quick Reference Guide (GAO/HR-97-2)
Defense Financial Management (GAO/HR-97-3)
Defense Contract Management (GAO/HR-97-4)
Defense Inventory Management (GAO/HR-97-5)
Defense Weapon Systems Acquisition (GAO/HR-97-6)
Defense Infrastructure (GAO/HR-97-7)
IRS Management (GAO/HR-97-8)
Information Management and Technology (GAO/HR-97-9)
Medicare (GAO/HR-97-10)
Student Financial Aid (GAO/HR-97-11)
Department of Housing and Urban Development (GAO/HR-97-12)
Department of Energy Contract Management (GAO/HR-97-13)
Superfund Program Management (GAO/HR-97-14)

The entire series of 14 high-risk reports is numbered GAO/HR-97-20SET.
GAO discussed major government programs and operations it has identified as high risk because of vulnerability to waste, fraud, abuse, and mismanagement, and legislative and agency actions that have resulted in progress towards resolving these problems. GAO noted that: (1) without additional attention to resolve problems in the 25 areas that are the current focus of GAO's high-risk initiative, the government will continue to miss huge opportunities to save billions of dollars, make better investments to reap the benefits of information technology, improve performance and provide better service, and more effectively manage the cost of government programs; (2) effective and sustained follow-through by agency managers is essential to make further headway and achieve greater benefits; (3) continued oversight by Congress will add essential impetus to ensuring progress as well; (4) landmark legislation passed by Congress in the 1990s has established broad management reforms, which, with successful implementation, will help resolve high-risk problems and provide greater accountability in many government programs and operations; (5) overall, agencies are taking high-risk problems seriously, trying to correct them, and making progress in many areas; (6) Congress has also acted to address several problems affecting these high-risk areas through oversight hearings and specific legislative initiatives; (7) full and effective implementation of legislative mandates, GAO suggestions, and corrective measures by agencies, however, has not yet been achieved because the high-risk areas involve long-standing problems that are difficult to correct; (8) federal agencies often fail to appropriately manage their finances, identify clearly what they intend to accomplish, or do the job effectively with a minimum of waste; (9) the Government Performance and Results Act (GPRA) requires agencies to set goals, measure performance, and report on their accomplishments; (10) GPRA will be more 
difficult for some agencies to apply than for others, but GPRA has the potential for adding greatly to government performance, a particularly vital goal at a time when resources are limited and public demand is high; (11) reliable financial information is key to better managing government programs, providing accountability, and addressing high-risk problems, but financial information has not been reliable enough to use in federal decisionmaking or to provide the requisite public accountability; (12) the landmark Chief Financial Officers Act spelled out a long overdue and ambitious agenda to help resolve financial management deficiencies; and (13) important and steady progress is being made under the act to bring about sweeping reforms and rectify the devastating legacy from inattention to financial management.
In conducting their operations, farmers are exposed to financial losses because of production risks—droughts, floods, and other natural disasters—as well as price risks. The federal government has played an active role in helping to mitigate the effects of these risks on farm income by promoting the use of crop insurance. RMA has overall responsibility for administering the federal crop insurance program, including controlling costs and protecting against fraud, waste, and abuse. RMA partners with 15 private insurance companies that sell and service the federal program’s insurance policies and share a percentage of the risk of loss and opportunity for gain associated with the policies. Through the federal crop insurance program, farmers insure against losses on more than 100 crops. These crops include major crops—such as corn, cotton, soybeans, and wheat, which accounted for three-quarters of the acres enrolled in the program in 2011—as well as nursery crops and certain fruits and vegetables. For the purposes of this report, we generally refer to participants in the federal crop insurance program as participating farmers. Most crop insurance policies are either production-based or revenue-based. For production-based policies, a farmer can receive a payment if there is a production loss relative to the farmer’s historical production per acre. Revenue-based policies protect against crop revenue loss resulting from declines in production, price, or both. The federal government encourages farmers’ participation in the federal crop insurance program by subsidizing their insurance premiums and acting as the primary reinsurer for the private insurance companies that take on the risk of covering, or “underwriting,” losses to insured farmers. A common measure of crop insurance program participation is the percentage of planted acres nationwide for major crops that are enrolled in the program. 
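The difference between the two policy types can be illustrated with a simplified indemnity calculation. This is a hedged sketch with invented coverage levels, yields, and prices; it is not RMA's actual rating or indemnity methodology.

```python
def production_indemnity(hist_yield, coverage, actual_yield, price, acres):
    """Production-based policy: pay for any shortfall below the yield
    guarantee (historical yield x coverage level), valued at a fixed price."""
    guarantee = hist_yield * coverage
    return max(0.0, guarantee - actual_yield) * price * acres

def revenue_indemnity(hist_yield, coverage, proj_price, actual_yield, harvest_price, acres):
    """Revenue-based policy: pay when actual revenue per acre falls below
    the revenue guarantee, so a decline in production, price, or both can
    trigger a payment."""
    guarantee = hist_yield * coverage * proj_price
    actual = actual_yield * harvest_price
    return max(0.0, guarantee - actual) * acres

# Invented example: 150 bu/acre history, 75% coverage, 100 bu/acre harvested,
# on 100 acres, with the price falling from $5 to $4 per bushel.
print(production_indemnity(150, 0.75, 100, 5.0, 100))       # 6250.0
print(revenue_indemnity(150, 0.75, 5.0, 100, 4.0, 100))     # 16250.0
```

The second payment is larger because the revenue-based policy responds to the price decline as well as the yield shortfall, which is the distinction the report draws between the two policy types.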
In addition, the federal government pays administrative expense subsidies to insurance companies as an allowance that is intended to cover their expenses for selling and servicing crop insurance policies. In turn, insurance companies use these subsidies to cover their overhead expenses, such as payroll and rent, and to pay commissions to insurance agencies and agents. Companies also incur expenses associated with verifying—adjusting—the amount of loss claimed. These expenses include, for example, loss adjusters’ compensation and their travel expenses to farmers’ fields. The financial relationships among the federal government, private insurance companies, agents, and farmers are illustrated in figure 1. For 2011, the federal government’s subsidy costs were about $7.4 billion for crop insurance premiums and about $1.3 billion for administrative expenses. Crop insurance premium subsidies are not payments to farmers, but they can be considered a financial benefit. Without a premium subsidy, a participating farmer would have to pay the full amount of the premium. The administrative expense subsidies also can be considered a subsidy to farmers; with these subsidies, crop insurance premiums are lower than they would otherwise be if the program followed commercial insurance practices. In private insurance, such as automobile insurance, these administrative expenses typically are included in the premium that a policy holder pays. ARPA and the 2008 farm bill set premium subsidy rates, that is, the percentage of the premium paid by the government. Premium subsidy rates vary by the level of insurance coverage that the farmer chooses and the geographic diversity of the crops insured. For most policies, the statutory subsidy rates range from 38 percent to 80 percent. Table 1 shows the total costs of subsidies for all crop insurance premiums and administrative expenses for 2000 through 2011. 
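The premium cost split described above reduces to simple per-policy arithmetic: the government pays the statutory subsidy rate times the total premium, and the farmer pays the remainder. A minimal sketch, using an invented $10,000 premium and an illustrative 75 percent rate rather than actual RMA figures:

```python
def split_premium(total_premium, subsidy_rate):
    """Return (government_subsidy, farmer_paid) for one policy, where
    subsidy_rate is the statutory share of the premium paid by the government."""
    subsidy = total_premium * subsidy_rate
    return subsidy, total_premium - subsidy

# At an illustrative 75 percent rate, the farmer pays a quarter of the premium:
gov, farmer = split_premium(10_000, 0.75)
print(gov, farmer)  # 7500.0 2500.0
```

As crop prices rise, the total premium rises, so the government's share grows in dollar terms even when the rate is unchanged, which is the mechanism behind the subsidy growth shown in table 1.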
The table shows that premium subsidies have generally increased since 2000, both in dollars and as a percentage of total premiums. The premium subsidy rates, authorized by ARPA, became effective in 2001. Premium subsidies increased, as a percentage of total premiums, from 37 percent in 2000 to 60 percent in 2001. In addition, premium subsidies rose as crop prices increased. As crop prices increase, the value of the crops being insured increases, which results in higher crop insurance premiums and premium subsidies. For example, the prices of major crops were substantially higher in 2011 than in 2006, and premium subsidies in 2011 (about $7.4 billion) were substantially higher than in 2006 (about $2.7 billion). USDA forecasts that the prices of major crops—corn, cotton, soybeans, and wheat—will continue to be substantially higher than 2006 prices through 2016. Administrative expense subsidies also increased because of higher crop prices. However, RMA capped administrative expense subsidies in the 2011 standard reinsurance agreement (SRA), a cooperative financial agreement between USDA and insurance companies. These changes became effective in 2011. As a result, administrative expense subsidies were lower in 2011 than they otherwise would have been. The federal government provides crop insurance subsidies to farmers in part to achieve high crop insurance participation and coverage levels, which are intended, according to USDA economists, to reduce or eliminate the need for ad hoc disaster assistance payments to help farmers recover from natural disasters, which can be costly. For example, under three separate congressionally authorized ad hoc crop disaster programs, USDA provided $7 billion in disaster assistance payments to farmers whose crops were damaged or destroyed by natural disasters from 2001 through 2007. Congress established a standing disaster program in the 2008 farm bill— the Supplemental Revenue Assistance Payments Program. 
Under this program, Congress funded a $3.8 billion permanent trust fund and directed the Secretary of Agriculture to make crop disaster assistance payments to eligible farmers who suffer crop losses on or before September 30, 2011. USDA—through FSA—began making disaster payments under this program in early 2010 for crop losses incurred in 2008. To qualify for a disaster assistance payment under this program, a farmer must have either purchased federal crop insurance coverage or been covered under the Noninsured Crop Disaster Assistance Program for all crops of economic significance on their farming operation. Without reauthorization, the Supplemental Revenue Assistance Payments Program will not make payments on losses caused by natural disasters that occurred after September 30, 2011. Farmers’ participation in the federal crop insurance program and spending on ad hoc disaster assistance have been policy issues for more than 30 years. According to a 2005 USDA publication, Congress passed the Federal Crop Insurance Act in 1980 to strengthen participation in the crop insurance program with the goal of replacing the costly disaster assistance programs. Crop insurance participation can be measured by acres enrolled in the program, the percentage of eligible acres of major crops, and the percentage of a crop’s market value insured—the coverage level. According to the USDA publication, the government has historically attempted to increase participation by subsidizing premiums. Under the 1980 law, the government offered premium subsidy rates of up to 30 percent. However, by 1994, less than 40 percent of eligible acreage was enrolled in the program, and Congress had passed ad hoc disaster assistance totaling nearly $11 billion. In order to increase participation, according to the USDA publication, the Federal Crop Insurance Reform Act of 1994 increased premium subsidy rates. Farmers responded by enrolling more acres. 
Enrollment was about 100 million acres in 1993 before the act and about 182 million acres in 1997. Under ARPA, premium subsidy rates increased again in 2001. Farmers subsequently purchased more insurance at higher coverage levels. With the increases in acres enrolled and coverage levels, premium subsidy costs increased. The 2005 USDA publication noted that by 2004 premium subsidies totaled nearly $2.5 billion and had become an increasingly costly way of encouraging participation. As shown in table 1, premium subsidies reached $7.4 billion in 2011. From 2008 through 2010, annual payments to farmers for their crop insurance claims averaged about $6 billion. Most claims are legitimate, but some involve fraud, waste, or abuse, according to RMA’s data mining contractor. USDA’s Office of the Inspector General has reported that fraud is commonly perpetrated through false certification of one or more of the basic data elements, such as production history, essential for RMA to determine program eligibility or validity of claims. Crop insurance fraud cases can be particularly complex in their details and correspondingly time-consuming to review. These fraud cases sometimes involve multiple individuals working together, such as farmers, insurance agents, and insurance loss adjusters. Claim payments based on fraudulent crop insurance losses sometimes result in comparatively large monetary costs to USDA. Waste is incurring unnecessary costs as a result of inefficient or ineffective practices, systems, or controls. Waste includes improper payments that may be caused by errors in data upon which claim payments are based. Abuse occurs when a participating farmer’s actions defeat the intent of the program, although no law, regulation, or contract provision may be violated. 
For example, under the Federal Crop Insurance Act, RMA must offer coverage for prevented planting—that is, if farmers cannot plant a crop for specified reasons, prevented planting coverage enables them to receive a claim payment. In 2005, we noted instances in which FSA county officials stated they believed that some farmers in their counties who claimed prevented planting losses never intended to plant or did not make a good faith attempt to plant their crop but still received prevented planting claim payments. In 2011, RMA issued guidance to its field offices and insurance companies to address abuse involving prevented planting. RMA uses data mining—a technique for extracting knowledge from large volumes of data—to detect potential cases of fraud, waste, or abuse by (1) developing scenarios of potential program abuse by farmers, insurance agents, and loss adjusters and (2) querying the database containing crop insurance data and information on weather, soil, and land surveys to generate reports and lists of participating farmers with anomalous claim payments. RMA has contracted with the Center for Agribusiness Excellence, located at Tarleton State University in Stephenville, Texas, to conduct data mining since 2001. Following USDA written procedures, RMA and the insurance companies are to use data mining results to conduct reviews of the claims to determine if there is actual fraud, waste, or abuse. The data mining tools that RMA uses include the following: List of farmers with anomalous claim payments. Through data mining, RMA develops a list of farmers with anomalous claim payments. RMA annually provides this list to FSA, which assists RMA in monitoring these farmers. Under USDA guidance, FSA county offices are to conduct two inspections (postplanting and preharvest) for each policy these farmers hold. 
FSA county offices are then to report to RMA on whether they inspected the crop and, if so, whether the inspection determined that (1) the inspected farmer’s crop was in good condition; (2) the inspected farmer’s crop was not in good condition, but other farmers’ crops in the local area were in good condition; or (3) the inspected farmer’s crop was not in good condition, and other farmers’ crops in the local area were also not in good condition. List of insurance agents and adjusters with anomalous losses. ARPA requires the Secretary of Agriculture to establish procedures that RMA can use to develop a list of insurance agents and loss adjusters with anomalous losses—losses that are higher than those of their peers in the same geographic area—and to review this list to determine whether the anomalous losses are the result of fraud, waste, or abuse. RMA uses data mining and scenarios it has developed for fraud, waste, and abuse to identify these insurance agents and adjusters. The RMA contractor’s data mining reports identify individual farmers with anomalous claim payments or insurance agents and adjusters with anomalous losses, but these anomalies only indicate potential cases of fraud, waste, or abuse. These claims and losses may be legitimate, resulting from unusual weather or other conditions on a farm. As such, a portion of each list inevitably represents “false positives”—farmers whose claims were valid. To determine if there is actual fraud, waste, or abuse, RMA or the insurance company must engage in additional review. Such reviews may require RMA or the company to, among other things, analyze the claims, appraisal sheets, special adjuster reports, photographs, and receipts for inputs, such as seeds and fertilizer. 
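The peer-comparison idea behind these lists—losses higher than those of peers in the same geographic area—can be sketched with a simple outlier test. This is a hypothetical illustration, not the contractor's actual scenarios; the z-score threshold, the minimum-peer rule, and the data are all invented.

```python
from statistics import mean, stdev

def flag_anomalous(records, threshold=2.0):
    """Flag agents whose loss ratio sits more than `threshold` standard
    deviations above the mean of peers in the same geographic area.
    `records` holds (agent_id, area, loss_ratio) tuples."""
    by_area = {}
    for agent, area, ratio in records:
        by_area.setdefault(area, []).append((agent, ratio))
    flagged = []
    for peers in by_area.values():
        ratios = [r for _, r in peers]
        if len(ratios) < 3:
            continue  # too few peers for a meaningful comparison
        mu, sigma = mean(ratios), stdev(ratios)
        flagged += [a for a, r in peers if sigma > 0 and (r - mu) / sigma > threshold]
    return flagged

# Nine agents near a 1.0 loss ratio and one far above them in the same area:
records = [(f"agent{i}", "county_a", 1.0) for i in range(9)]
records.append(("agent9", "county_a", 6.0))
print(flag_anomalous(records))  # ['agent9']
```

Agents flagged this way are only candidates for review; as the report notes, a flagged loss may be legitimate, which is why the follow-up claim reviews, and the feedback they generate, matter.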
These reviews are needed to determine the validity of the data mining reports; providing feedback on the reports’ validity enables RMA’s data mining contractor to refine its tools, thereby improving the detection of fraud, waste, and abuse. RMA administers the crop insurance program through the SRA. This agreement establishes the terms and conditions under which insurance companies that sell and service policies have to operate. Under the 2011 SRA, insurance companies are to conduct reviews, including inspections of crop insurance policies for which anomalies have been identified through data mining, and report the results to RMA. These reviews are not to exceed 3 percent of eligible crop insurance contracts (about 30,000 policies), unless RMA provides notice that additional reviews are required. The SRA also requires insurance companies to conduct inspections or monitoring programs for agents and loss adjusters that RMA has identified as necessary for protecting the program’s integrity. Unlike the crop insurance program, many USDA farm programs—including income support programs, conservation programs, and disaster assistance programs—have statutory income and payment limits that apply to individual farmers and legal entities. Income limits set the maximum amount of income that a person or legal entity can earn and still remain eligible for certain farm program payments. For example, a person or legal entity with an average adjusted gross farm income (over the preceding 3 tax years) exceeding $750,000 is generally ineligible for direct payments. Payment limits set the maximum payment amount that a person or legal entity can receive per year from a farm program. For example, for direct payments, the payment limit in the 2008 farm bill is generally $40,000 per person or legal entity. For a disaster assistance program, the annual payment limit is $100,000 per person or legal entity. 
Additional income and payment limits for selected farm programs are described in appendix II. To be eligible for certain farm program payments, including Revenue Election Program payments under the 2008 farm bill, an individual or entity must be “actively engaged in farming” (see GAO, Farm Program Payments: USDA Needs to Strengthen Regulations and Oversight to Better Ensure Recipients Do Not Circumvent Payment Limitations, GAO-04-407 (Washington, D.C.: Apr. 30, 2004)). To be considered actively engaged in farming, an individual must, among other things, make significant contributions to a farming operation in (1) capital, land, or equipment and (2) personal labor or active personal management. An entity is considered actively engaged in farming if, among other things, the entity separately makes a significant contribution of capital, land, or equipment, and its members collectively make a significant contribution of personal labor or active personal management. In addition, participants in many farm programs who farm in areas identified as having highly erodible land or a wetland must comply with certain land and environmental conservation requirements for payment eligibility purposes. Participants who fail to abide by or apply approved conservation practices on land identified as highly erodible or a wetland are subject to payment reductions or total ineligibility for program payments. According to our analysis of RMA data for 2011, the federal government would have achieved savings in the crop insurance program by limiting premium subsidies for crop insurance participants, as payments are similarly limited for other farm programs. A decision to limit or reduce premium subsidies to achieve cost savings raises other considerations, such as the potential effect of such a limit on the financial condition of large farms and on program participation. 
Without limits on the premium subsidies in the crop insurance program, the nearly 900,000 farmers participating in the program received premium subsidies of $4.7 billion in 2010 and $7.4 billion in 2011. Applying limits on premium subsidies to participating farmers, similar to the payment limits for other farm programs, would lower program costs and save federal dollars, according to our analysis of RMA data. Using a limit of $40,000 per participating farmer for premium subsidies for this period—the limit applied to direct payments—we identified significant potential savings to the federal government—savings of up to $358 million for 2010 and $1 billion for 2011. The amount of these savings may depend on whether, and the extent to which, farmers and legal entities reorganized their businesses to avoid or lessen the effect of limits on premium subsidies. As we have previously reported regarding payment limits for other farm programs, some farming operations may reorganize to overcome payment limits to maximize their farm program benefits. For these farmers and legal entities, it is unclear whether further reorganization to lessen the effect of limits on premium subsidies would occur. In addition, in some instances, the requirement that an individual or entity be actively engaged in farming to receive farm program benefits is likely to prevent the creation of entities in order to avoid a limit on premium subsidies. Finally, some farmers would likely begin to report their spouse as a member of the farming operation, which under payment limit rules enables an operation to double the amount of benefits it can receive. In particular, if a $40,000 limit on premium subsidies had been applied in 2010, up to 13,309 farmers—1.5 percent of all participating farmers—would have seen their subsidies reduced, for an annual savings of up to $358 million to the federal government. 
For 2011, if the limit had been applied, up to 33,690 farmers—3.9 percent of all participating farmers—would have received reduced subsidies, at an annual savings of up to $1 billion. The number of participating farmers receiving more than $40,000 in premium subsidies increased from 2010 to 2011 because crop prices increased. Higher crop prices increased the value of crops insured, resulting in higher crop insurance premiums and hence a higher subsidy level. Figures 2 and 3 provide more information about the distribution of premium subsidies among participating farmers in 2010 and 2011. The figures show the number of participating farmers by the level of premium subsidies that individual farmers (i.e., persons or legal entities) received. (Since we issued GAO-04-407, the 2008 farm bill has decreased the incentive to reorganize a farming operation in order to avoid a limit on farm program payments by eliminating the “three-entity rule” and requiring direct attribution of payments to individuals.) In 2010, the average value of the premium subsidies received by participating farmers was $5,339. Thirty-seven participating farmers each received more than $500,000 in premium subsidies. The participating farmer receiving the most in premium subsidies—a total of about $1.8 million—was a farming operation organized as a corporation that insured cotton, tomatoes, and wheat across two counties in one state. In addition, the cost of the administrative expense subsidies that the government spent on behalf of this corporation was about $309,000. Another of the 37 participating farmers was an individual who insured corn, forage, potatoes, soybeans, sugar beets, and wheat across 23 counties in six states, for a total of about $1.6 million in premium subsidies. In addition, the cost of the administrative expense subsidies that the government spent on behalf of this farmer was about $443,000. In 2011, the average value of the premium subsidies received was $8,312. 
Fifty-three participating farmers each received more than $500,000 in premium subsidies. The largest recipient was a corporation that insured nursery crops across three counties in one state, for a total of about $2.2 million in premium subsidies. In addition, the administrative expense subsidies that the government spent on behalf of this corporation totaled about $816,000. Another of the 53 farmers was an individual who insured canola, corn, dry beans, potatoes, soybeans, sugar beets, and wheat across eight counties in two states, for a total of about $1.3 million in premium subsidies. In addition, the administrative expense subsidies that the government spent on behalf of this farmer totaled about $499,000. Alternatively, recent studies—noting the rising cost of premium subsidies—have proposed reducing premium subsidy rates for all participating farmers to achieve savings. For example, if the premium subsidy rate for 2010 and 2011 had been reduced by 10 percentage points—from 62 percent to 52 percent—for all participating farmers, the annual cost savings for those years would have been about $759 million and $1.2 billion, respectively. We also examined the effect on costs for the federal crop insurance program of applying a crop insurance subsidy limit to administrative expense subsidies, as well as premium subsidies. Additional savings would be realized, according to our analysis. For example, if a limit of $40,000 per farmer for both premium subsidies and administrative expense subsidies had been applied to the crop insurance program for 2011, up to 52,693 farmers (6 percent of all participating farmers) would have seen their subsidies reduced, at an annual savings of up to nearly $1.8 billion to the federal government. In contrast, applying limits to premium subsidies alone would have resulted in a savings of about $1 billion. Additional information about the 2010 and 2011 cost of premium subsidies and administrative expense subsidies by farmer is in appendix IV.
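The arithmetic behind the two savings options discussed above, a per-farmer cap versus an across-the-board rate cut, can be sketched in a few lines. The figures below are hypothetical illustrations, not RMA data, and this is not the methodology GAO used for its estimates; a subsidy cap saves only the excess above the cap for the few large recipients, while a rate cut shrinks every farmer's subsidy proportionally.

```python
# Illustrative sketch (hypothetical figures, not RMA data) of the two
# cost-saving options: (a) capping each farmer's premium subsidy at
# $40,000, and (b) cutting the subsidy rate from 62% to 52% for everyone.

subsidies = [5_000, 12_000, 38_000, 95_000, 510_000]  # per-farmer subsidies
CAP = 40_000

# (a) A cap saves the amount by which each subsidy exceeds $40,000;
# farmers below the cap are unaffected.
cap_savings = sum(max(0, s - CAP) for s in subsidies)

# (b) A rate cut from 62% to 52% shrinks every subsidy by 10/62 of its
# current value, since the subsidy is proportional to the subsidy rate.
rate_cut_savings = sum(s * 10 / 62 for s in subsidies)

print(f"Cap savings:      ${cap_savings:,}")           # $525,000
print(f"Rate-cut savings: ${round(rate_cut_savings):,}")
```

Note how the cap concentrates the reduction on the one large recipient, while the rate cut touches all five, which mirrors the trade-off between the two policy options described above.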
In addition to federal cost savings, we identified a number of other considerations that may come into play in deciding whether to limit premium subsidies to individual farmers. These considerations include (1) the potential effect on the financial condition of large farms (i.e., those with annual gross sales of $1 million or more), whose owners are most likely to be affected by subsidy limits; (2) the availability of other risk management tools against crop losses, such as marketing contracts; and (3) the potential effect on beginning and smaller farmers. In addition, we identified considerations associated with either limiting premium subsidies to large farmers or reducing premium subsidy rates for all farmers. The application of limits of $40,000 in premium subsidies to farmers participating in the federal crop insurance program would primarily affect farmers who have large farms. For example, as discussed earlier, using our data for 2011, these participating farmers represented 3.9 percent of the farmers participating in the crop insurance program in 2011 and accounted for 32.6 percent of the premium subsidies. In view of the insured value of these farmers’ crops, they likely had annual gross sales approaching or exceeding $1 million. In addition, the insured value of these farmers’ crops represented about 26 percent of the total value of insured crops in 2011. Limiting premium subsidies to farmers may raise concerns about how these limits could affect large farms’ financial condition. Based on our review of data from USDA’s Agricultural Resource Management Survey on the financial condition of farms, by farm size, large farms are better positioned than smaller farms to pay a higher share of their premiums. 
Specifically, according to the USDA data: During 2008 and 2009, the most recent years for which USDA data were available, the largest farms with crop insurance coverage (i.e., those with annual gross sales of $1 million or more) earned an average annual net farm income of about $561,000. In contrast, the next two farm categories (farms with annual gross sales of $500,000 to $1 million and farms with annual gross sales of $250,000 to $500,000) had average annual net farm incomes of about $184,000 and $92,000, respectively. The largest farms with crop insurance coverage had higher relative profitability as measured by rate of return on equity, which is the ratio of net farm income to the net worth of the farm. These farms had an average rate of return on equity of 8.8 percent. In contrast, the next two farm categories had rates of 4.5 percent and 1.9 percent, respectively. The largest farms had higher debt-to-asset ratios than the next two farm categories, but the largest farms’ ability to service debt by covering principal payments and interest on term debt was greater. Furthermore, a high debt-to-asset ratio is not necessarily a problem, as long as the rate of return on assets exceeds the interest rate on the funds borrowed. On average, farms with sales greater than $5 million generate more net cash income per dollar of assets than other farms, and the larger gross cash income can be used to pay interest or reduce loan balances. In addition, regarding the financial condition of large farms, a related consideration is the global competitiveness of U.S. agriculture. According to critics of limits on farm program benefits, larger farms should not be penalized for the economies of size and efficiencies they have achieved, and farm programs should help make U.S. farmers more competitive in global markets.
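The profitability metric cited in the USDA data above reduces to a simple ratio. In the sketch below, only the $561,000 income figure comes from the text; the net worth is an assumed value chosen purely so the illustration lands on the 8.8 percent figure reported.

```python
# Rate of return on equity, as defined above: the ratio of net farm
# income to the net worth of the farm, expressed as a percentage.

def return_on_equity(net_farm_income: float, net_worth: float) -> float:
    return 100 * net_farm_income / net_worth

# A farm earning $561,000 on an assumed net worth of $6,375,000
# (hypothetical) shows a rate of return on equity of 8.8 percent.
print(f"{return_on_equity(561_000, 6_375_000):.1f}%")  # 8.8%
```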
If the large farmers affected by a limit on premium subsidies were to reduce their coverage, they may be able to self-insure through a variety of risk management methods, including the following: Marketing contracts. Marketing contracts reduce price risks and are already used by many large farmers. These contracts are either verbal or written agreements between a buyer and a farmer that set a price for a commodity before harvest or before the commodity is ready to be marketed. Futures contracts and hedging. A futures contract is a financial contract obligating the buyer to purchase an asset (or the seller to sell an asset), such as a commodity, at a predetermined future date and price. Futures contracts detail the quality and quantity of the underlying asset and are standardized to facilitate trading on a futures exchange. Futures can be used to hedge against price movements of the underlying asset. For example, a producer of corn could use futures to lock in a certain price and manage risk (hedge). Crop and other enterprise diversification. Diversification is a risk management strategy that involves participating in more than one activity. A crop farm, for example, may have several productive enterprises (i.e., several different crops or both crops and livestock), or may operate nonadjacent parcels so that local weather disasters are less likely to reduce yields for all crops simultaneously. Liquid credit reserves. Farmers may maintain liquid credit reserves, such as an open line of credit, to generate cash quickly to meet financial obligations in the face of an adverse event. Liquid credit reserves reflect unused borrowing capacity. Private insurance. Certain agricultural risks—such as damage from hail and other weather events—are insured by private companies without subsidized premiums. Unlimited premium subsidies for individual farmers and farm entities may compound challenges that beginning and smaller farmers already face.
For example, we reported in 2007 that the challenges facing beginning farmers include obtaining capital to purchase land and that the rising cost of land, driven in part by farm program subsidies, may make it difficult for beginning farmers to purchase land. According to USDA studies, farm program payments and other benefits, such as premium subsidies, result in higher prices to buy or rent land because, in some cases, the benefits go directly to landowners—resulting in higher land value—and in other cases the benefits go to tenants, prompting landlords to raise rental rates. Furthermore, a recent USDA report explained how farm program payments may provide an advantage to larger farms. According to the report, “For some farmers, payments may provide opportunities to increase the size of their operation. A steady stream of income may allow recipients to gain access to higher levels of credit or may allow them to increase their rental or purchase bids for land. This may provide opportunities for them to increase in size while driving out competition from smaller farms that don’t have access to the same levels of capital, which can impact the overall structure of agriculture.” We identified additional considerations associated with either limiting premium subsidies to large farms or reducing premium subsidy rates for all farmers. Premium subsidy limits or reduced premium subsidy rates could lead to lower participation in the federal crop insurance program and higher disaster assistance payments to farmers. In the past, Congress has authorized ad hoc disaster assistance payments to help farmers whose crops were damaged or destroyed by natural disasters. However, in view of the nation’s budgetary pressures, Congress may be less willing to approve such payments than it has in the past.
In addition, according to a Congressional Budget Office report, the increasing importance of crop insurance to private lenders who provide farm loans may cause farmers to continue to participate in the crop insurance program, even if premium subsidies were reduced. Furthermore, assuming they are eligible to purchase unsubsidized crop insurance, farmers could still enroll all of their eligible crop acres in the program, making them eligible to receive claim payments on these acres. In the event of a loss, farmers who chose to maintain crop insurance coverage as they had in the past would then have the same level of protection. As a member of the World Trade Organization, the United States has made commitments to limit domestic agricultural support that is most likely to distort trade. Under the current World Trade Organization agreement, the United States is committed to spending no more than $19.1 billion per year on this support. Keeping this domestic agricultural support below this limit is likely to be a consideration of policymakers when they are developing or modifying farm programs. In August 2011, when the United States reported its domestic agricultural support for 2009 to the World Trade Organization, it included the value of crop insurance premium subsidies—$5.4 billion—in its submission as nonproduct-specific support. This $5.4 billion was the largest amount reported as nonproduct-specific support, which totaled $6.1 billion. However, under the current agreement, nonproduct-specific support in 2009 did not count toward the United States’ limit of $19.1 billion. Since 2001, RMA has used data mining tools to prevent and detect fraud, waste, and abuse in the crop insurance program by either farmers or insurance agents and adjusters, but it has not maximized their use to realize potential additional savings, largely because of competing compliance review priorities.
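To make the idea of these data mining tools concrete, the sketch below shows one simple form such a screen could take: flagging producers whose loss ratio is far above their peer group's average. All names, figures, and the two-times-average threshold are hypothetical; RMA's actual tools and criteria are more sophisticated and are not described at this level of detail in this report.

```python
# Hypothetical peer-comparison screen of the kind a data mining tool
# might apply: flag producers whose loss ratio (claims paid / premium)
# is more than double the peer-group average. All data are invented.

records = [
    # (producer_id, premium_paid, claims_paid)
    ("A", 10_000, 4_000),
    ("B", 12_000, 5_000),
    ("C", 9_000, 3_500),
    ("D", 11_000, 30_000),   # anomalously large claims
    ("E", 10_500, 4_200),
]

loss_ratios = {pid: claims / premium for pid, premium, claims in records}
peer_mean = sum(loss_ratios.values()) / len(loss_ratios)

# Flag anyone whose loss ratio exceeds twice the peer average.
flagged = [pid for pid, ratio in loss_ratios.items() if ratio > 2 * peer_mean]
print(flagged)  # ['D']
```

A list produced this way is a starting point for review, not proof of wrongdoing, which matches how the report describes the lists: some anomalous claims turn out to be legitimate.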
In particular, using data mining tools, RMA develops lists of farmers with anomalous claim payments and informs these farmers that their fields will be inspected. In addition, investigators from RMA and USDA’s Office of the Inspector General sometimes use the list of agents and adjusters—identified through data mining—who have anomalous losses to corroborate information from other sources, but RMA has not conducted required reviews of agents and adjusters to determine whether anomalous losses are the result of fraud, waste, and abuse. RMA has not maximized the use of data mining tools, largely because of competing compliance review priorities, according to RMA documents we examined and officials we spoke with. In addition, RMA and FSA have not taken full advantage of data management techniques to increase the effectiveness of data mining. Using data mining, RMA has identified farmers with anomalous claim payments (listed farmers), as called for under USDA procedures developed pursuant to an ARPA requirement. In addition, as described in these procedures, at RMA’s request, FSA has sent letters informing these farmers that an official in the FSA county office would inspect the crop in at least one of their fields during the growing season and report the results of the field inspection to RMA. For example, in 2010—the most recent year for which data are available—RMA asked FSA to send letters to 1,747 listed farmers for each of their 2,452 policies with anomalous claim payments. RMA officials told us that the letters act as a warning and have substantially reduced total claims, by an estimated $838 million from 2001 through 2010. According to RMA officials, about two-thirds of the farmers who receive a letter from FSA reduce or stop filing claims for at least 2 or 3 years following receipt of the letter, and one-third of farmers make additional anomalous claims after being placed on the list; some of these claims are likely to be legitimate. 
The value of identifying farmers with anomalous claim payments may be undermined, however, by the fact that FSA does not complete all field inspections, and neither FSA nor RMA has a process to ensure that the results of all completed inspections are accurately reported, in accordance with USDA’s written procedures. In particular, in 2009 and 2010, RMA did not have field inspection results for 20 percent and 28 percent, respectively, of the fields for farmers listed as having anomalous claim payments. Four states—California, Colorado, Florida, and Texas— accounted for more than 40 percent of the missing data. For example, in Florida, FSA inspected a field for 8 of the 88 farmers with anomalous claim payments, according to our review of RMA records. If FSA does not complete all field inspections requested by RMA, not all farmers who have had anomalous claim payments will be subject to a review, increasing the likelihood that fraud, waste, or abuse may occur without detection. Table 2 shows the number of requests RMA made for FSA field inspections and the percentage of fields inspected for 2009 and 2010 in selected states. We identified three reasons for the absence of FSA field inspections. First, we found that FSA state offices are not required to monitor the completion of field inspections conducted by FSA county offices during the growing season. Without FSA state office monitoring of RMA- requested field inspections, FSA county offices may have less incentive to complete them. The FSA state offices in the six states we reviewed varied in how closely they monitor these field office inspections. In particular, in Minnesota and North Dakota, FSA state offices monitored completion of field inspections and, in 2010, in these states, FSA county offices had 111 and 183 field inspections to conduct, respectively, and completed 97 percent and 92 percent, respectively, of these inspections. 
In Minnesota, according to an FSA official we spoke with, the state office “encouraged” completion of field inspections by e-mailing all of the state’s FSA county offices a list of offices that had not completed their inspections. In North Dakota, a state FSA official attributed the state’s high rate of completed inspections largely to the fact that the state office monitors the rate of field inspections during the growing season, encouraging county offices that have not completed their inspections to do so. In contrast, California, Colorado, and Florida each had from 24 to 85 inspections to conduct and completed from none to 44 percent of these inspections. FSA officials from California and Florida agreed that it would be a good practice to monitor the completion of field inspections during the growing season at the state or district level to hold the county offices accountable. Second, FSA state officials in two of the four states with low inspection rates told us that insufficient resources were a key reason that county offices had not completed FSA inspections. These officials said that staffing had decreased for the past several years, but workload had increased. Third, some FSA state officials said that county office staff may hesitate to spend time and effort on inspections when they do not believe the inspections will have any impact. For example, they said that neither they nor county officials are informed of any action taken on their inspection results and that county officials are discouraged when their inspections do not result in actions against the farmers who appear to be engaged in negligent farming practices. However, at least one RMA compliance office—RMA’s Northern Regional Compliance Office—does provide feedback to FSA. This office is responsible for Iowa, Minnesota, Montana, North Dakota, South Dakota, Wisconsin, and Wyoming. 
According to an FSA official in North Dakota, RMA’s Northern Regional Compliance Office sends FSA state officials letters describing the results of reviews RMA requested the insurance companies to conduct based on FSA inspections, and the state officials are to forward this information to the counties. In addition, in 2010, as provided for under the SRA, RMA regional compliance offices directed insurance companies to review and report on farmers’ policies to ascertain whether fraud, waste, or abuse had occurred. These RMA offices have generally directed such reviews in two situations. First, when FSA inspectors reported that farmers’ crops were in worse condition than their peers, RMA regional compliance offices may direct companies to analyze the claims, documenting their work with appraisal sheets, special adjuster reports, pictures, and receipts for inputs such as seeds and fertilizer. Second, when farmers have anomalous claims data related to production history—a key factor in determining the total claims farmers make—RMA offices may direct the insurance companies to review these policies. USDA’s Office of the Inspector General reported in 2009 that RMA lacks documented procedures for following up on cases where farmers file claims after FSA’s field inspections indicate that crops are in good condition, and the farmer should not experience a loss. Under the Standards for Internal Control in the Federal Government, federal agencies are to employ control activities, such as clearly documenting internal control in management directives, administrative policies, or operating manuals, and the documentation is to be readily available for examination. Without documented agency policies and procedures for reviewing farmers’ policies identified by data mining reports, RMA cannot provide reasonable assurance that the farmers’ policies would be reviewed consistently. 
The Inspector General added that, since RMA’s resources are not unlimited, the agency should consider requiring that insurance companies perform as much of this work as possible. In this regard, as we noted above, about one-third of farmers listed as having anomalous claim payments again claim losses after being placed on the list. RMA has not maximized the use of the list of farmers with anomalous claim payments by, for example, directing insurance companies to review these farmers’ claims before paying them after FSA has reported the crops to be in good condition. According to three current and former RMA and Office of the Inspector General officials, because these farmers have previously had anomalous claim payments, their claims warrant a review, particularly when FSA’s inspection found their crops to be in good condition within weeks of the time that the farmer made a claim. Investigators from RMA and USDA’s Office of the Inspector General said that they use the list of insurance agents and loss adjusters with anomalous losses at times to corroborate information from other sources—such as the Office of the Inspector General’s fraud hotline— rather than as a basis for initiating reviews. However, RMA has not fully met a statutory ARPA requirement to conduct a review of agents and adjusters with higher losses than their peers to determine whether the losses associated with these individuals are the result of fraud, waste, or abuse. Officials from RMA and its data mining contractor told us of an instance in which an investigator in USDA’s Office of the Inspector General used the list of insurance agents and loss adjusters with anomalous losses as a starting point. Based on information in the list, the investigator began calling other USDA Inspector General investigative offices to determine whether they were also familiar with an agent who frequently had large anomalous losses. 
As a result of the list and telephone calls, the investigator identified an Inspector General hotline informant who had filed complaints about the same agent; the investigator initiated a review that became the largest crop insurance fraud case in U.S. history; this case involved tobacco farmers and insurance agents and adjusters working together. According to the Office of the Inspector General, the case may result in lower program costs of more than $80 million and continues to expand to more related reviews. We also found that RMA had not fully met a requirement to conduct a review of agents and adjusters with higher losses than their peers to determine whether the losses associated with these individuals are the result of fraud, waste, or abuse. In 2009, the Inspector General found that RMA was not reviewing these individuals and recommended that RMA develop policies and procedures for reviewing disparately performing agents and adjusters to assess whether the higher-than-average loss ratios for the agents and adjusters identified are the result of potential fraud, waste, or abuse. According to RMA officials we interviewed, RMA had not fully met this requirement because of resource constraints, among other things. These officials told us that investigating agents and loss adjusters is more complex and time-consuming than investigating individual farmers because one agent or adjuster may be identified with a dozen or more policies. In addition, officials said, the insurance company database used to develop the list includes agents who are not servicing the policy they are identified with. RMA officials told us that they have discussed the problem of inaccurate data with insurance companies and that the companies have made improvements, but they could not specify the extent of the problem or the improvements. 
Some RMA officials also pointed out that investigators use many different data mining tools and that it may be a better use of resources if the requirement for RMA to review the list of agents and adjusters was changed to allow RMA to review agents and adjusters and farmers using a variety of data mining tools, such as a software program that helps investigators identify links among producers, agents, or adjusters who are jointly engaged in activities that are anomalous. In addition, in response to another 2000 ARPA requirement, RMA included in the 2011 SRA a provision directing insurance companies to annually evaluate the performance of every agent and loss adjuster, including their loss ratios and the number and type of errors made by an agent or adjuster. The SRA does not, however, require additional focus on agents and adjusters identified as having anomalous losses through data mining. According to RMA documents we examined and five of the six RMA regional compliance officials we spoke with, RMA staff devote most of their time to three priority compliance activities aimed at detecting fraud, waste, and abuse in crop insurance. As a result, they have limited time to review individuals identified by data mining tools, such as the list of farmers with anomalous claim payments and the list of agents and adjusters with anomalous losses. Specifically, regional compliance offices are responsible for carrying out the following priority activities: Reconciling conflicting RMA/FSA data associated with an FSA disaster assistance program, the Supplemental Revenue Assistance Payments Program. RMA headquarters directs staff to reconcile RMA data, such as the number of acres for which a farmer is claiming a loss, with FSA data on the number of acres planted. According to an RMA document, as of August 5, 2011, FSA had identified more than 5,000 discrepancies for 2008 and 2009 and sent these to RMA, and RMA regional compliance offices had resolved over half of them. 
RMA officials said that they do not use data mining to determine priorities for reconciliations because they are required to reconcile every discrepancy referred by FSA, even if it is a $10 discrepancy. In addition, the RMA Administrator told us that insurance companies that are asked to help RMA resolve discrepancies have discussed the substantial costs they incur to correct small errors. Reviewing crop insurance policies to comply with the Improper Payments Information Act of 2002. RMA staff review 250 randomly selected policies each year, as agreed with the Office of Management and Budget, to estimate a payment error rate. Some RMA officials said that they would prefer to focus more attention on using data mining to review high-risk policies to detect and prevent fraud, waste, and abuse and focus less attention on conducting reviews to estimate an error rate. Reviewing potential cases of fraud, waste, or abuse in the crop insurance program that were identified through hotline calls and referred by USDA’s Inspector General. According to RMA data, each year the agency opens and closes several hundred cases of potential fraud, waste, and abuse involving thousands of crop insurance policies; some field offices reported having large backlogs of cases to address. Several RMA officials said they would like to use data mining to determine which referrals they should review, but Office of the Inspector General policy requires them to review all of these referrals within 90 days. They noted that some referrals provide little information or relate to small-value policies, but RMA may give priority to these referrals over reviews with a potentially greater cost-benefit result because of the Office of the Inspector General policy. 
We identified three areas in which RMA and FSA have not taken full advantage of data management techniques to increase the effectiveness of data mining: inaccurate and incomplete FSA field inspection data for listed farmers, the insufficiency of the data collected from insurance companies on the results of their reviews, and RMA’s not providing insurance companies with results for most FSA inspections. Certain FSA field inspection data for listed farmers may be inaccurate and incomplete because the results of the inspections may be reported late or not at all. This problem arises because RMA and FSA have a complicated process for transmitting the data, creating opportunities for errors and omissions. Specifically: Staff in about 1,000 FSA county offices transmit their field inspection data to nearly 50 state offices by e-mailing data, mailing CDs or paper documents, or inputting the data in their FSA computer systems. The FSA state offices e-mail or mail the data, in its different formats, to six RMA regional compliance offices. Two of the six RMA regional compliance offices retype the data into an RMA system, and the other four offices retype a small portion of the data—the field inspection date and crop conditions—into a spreadsheet that already contains the original data mining information, such as the policy number and participating farmer’s name. The six offices then send the FSA data to RMA’s data mining contractor for analysis. Through interviews with FSA state officials and a review of the data on FSA field inspection results, we identified several examples of errors and omissions that had occurred in the process of recording and transmitting the data from FSA to RMA and its data mining contractor for additional analysis and follow-up on anomalous claims.
For example: Officials in three FSA state offices said that additional field inspections likely have been done even though the data for them are missing. They said that some county staff had not been trained on how to enter inspection results into the FSA computer system and therefore did not always report information on completed inspections to state FSA offices so that it could be provided to RMA. FSA state offices, at times, did not forward field inspection data to RMA for several months after the inspections were completed, according to our analysis of FSA records and an RMA official. All of the field inspection data for one state were missing from RMA’s data mining contractor records because the FSA state office provided the data to RMA after RMA had sent inspection data to the data mining contractor for analysis. At least 10 percent of the data for another state were missing for the same reason. One RMA official noted that FSA occasionally provides late responses for fields with crops in worse condition than others in the area. Such delays mean that RMA cannot ask insurance companies to review the fields for these policies before harvest or before a claim payment is made, when insurance adjusters could determine whether the crop was being deliberately managed in a way that reduces yield. RMA officials and contractor staff said that they have recognized these problems and have proposed using software that other USDA agencies use in a new process to transmit the data from the FSA county offices directly to a USDA system while providing access to RMA and FSA. They told us that they are planning to implement the new system before the 2012 field inspections begin and believe the new system will eliminate the problems we identified. RMA does not collect sufficient data from insurance company reviews in an electronic format that facilitates its data mining, according to RMA officials.
RMA uses an electronic form to collect data from all types of company reviews, including those that RMA requested as a result of data mining and those that were requested because of Office of the Inspector General hotline referrals. However, this form does not provide the data mining contractor with sufficient information on which records the insurance companies reviewed and why they reviewed these records in order to determine if an adjustment needs to be made to improve data mining, according to RMA officials and the data mining contractor. In addition, RMA officials and the data mining contractor told us that the electronic form does not provide an efficient way of sorting out the data needed for data mining. RMA officials said that more complete data on the insurance company reviews are important for improving data mining because insurance companies often have information that RMA does not have that can explain why an anomalous claim is being made. The data mining contractor stated that it had developed proposals for revising the electronic form to collect information that could help it improve data mining lists, such as the list of farmers with anomalous claim payments and agents and adjusters with anomalous losses. In 2009, the Inspector General also concluded that the data mining contractor needed such information to refine data mining reports. Without an electronic mechanism to collect sufficient data from insurance companies on their reviews, RMA is limited in the analyses it can conduct and in the improvements it can make in data mining. As a result, RMA may be missing opportunities for savings that result from better data mining. RMA officials said that they are considering making changes so that the data mining contractor receives additional information. 
RMA generally does not provide insurance companies with field inspection results for most FSA inspections—that is, those for fields in good condition—but provides them with the field inspection results for a small portion of the farmers—those with crops in worse condition than their peers. However, inspection information on fields in good condition is important—particularly for inspections that occurred shortly before a claim was made. Past cases have revealed that some farmers may harvest a high-yielding crop, hide the sale of that crop, and report a loss to receive an insurance payment. USDA’s Inspector General has reported on the need to use FSA field inspection information to identify potential fraud, waste, and abuse. For example, the Inspector General reported on two farmers on the list of farmers with anomalous claim payments whose crops were in good condition, according to the FSA inspection; however, these farmers filed nearly $300,000 in claims a short time after the FSA inspection, and RMA did not notice the discrepancy. RMA’s data mining contractor stated that it could, with a few days of effort, provide all the FSA field inspection data to the insurance companies, including those on crops in good condition, which represent the bulk of inspections. Federal crop insurance plays an important role in protecting farmers from losses caused by natural disasters and price declines, and it has become one of the most important programs in the safety net for farmers. As we have discussed, unlike other farm programs, the crop insurance program does not limit the subsidies that a farmer can receive. Without subsidy limits, a small number of farmers receive relatively large premium subsidies and a relatively large share of total premium subsidies. In addition, premium subsidies for all farmers, which averaged 62 percent of premiums in 2011, have increased substantially since 2000.
With increasing pressure to reduce the federal budget deficit and with record farm income in recent years, it is critical that taxpayer-provided funds for the farm safety net are spent as economically as possible. Limits on premium subsidies to individual farmers, reductions in the amount of premium subsidies for all farmers participating in the crop insurance program, or both present an opportunity to save hundreds of millions of dollars per year for taxpayers without compromising this safety net. In addition, RMA has made substantial progress over the past decade in developing data mining tools—such as a list of farmers who have received anomalous claim payments—to detect and prevent fraud, waste, and abuse, but RMA's use of these tools lags behind their development, largely because of competing priorities. By not maximizing the use of these tools, RMA may be missing opportunities to identify and prevent losses to the federal government that result from fraud, waste, or abuse. Furthermore, because FSA does not require its state offices to monitor, during the growing season, the completion of county office field inspections for farmers with anomalous claim payments, and because FSA does not always communicate its inspection results to RMA in a timely manner, RMA and FSA may not know about farmers who improperly manage their crops or falsely report losses. FSA state offices that do such monitoring appear to achieve higher completion rates of county office field inspections. RMA has also not provided insurance companies with most FSA inspection results, particularly findings that crops were in good condition, or directed insurance companies to review the results of all completed FSA field inspections before paying claims that occur after inspections showed a crop was in good condition. As a result, insurance companies may not have information that could help them identify claims that should be denied. 
RMA has also not realized the potential of data mining tools to enhance its detection of fraud, waste, and abuse on the part of insurance agents and adjusters, including addressing the ARPA requirement to review agents and adjusters identified as having anomalous losses. Furthermore, RMA has not taken steps that would require minimal resources, such as directing insurance companies, during annual performance evaluations of agents and adjusters, to focus more attention on the list of agents and adjusters with such losses. In addition, RMA's electronic form does not collect sufficient data from insurance companies on their reviews to facilitate the use of these reviews in data mining.

To reduce the cost of the crop insurance program, Congress should consider limiting the subsidy for premiums that an individual farmer can receive each year, reducing the subsidy for all farmers participating in the program, or both. To help prevent and detect fraud, waste, and abuse in the federal crop insurance program, we recommend that the Secretary of Agriculture direct the Administrator of RMA and the Administrator of FSA, as appropriate, to take the following four actions:

For the list of farmers with anomalous claim payments, encourage the completion of FSA county office inspections during the growing season by requiring FSA state offices to monitor the status of their completion.

Maximize the use of the list of farmers with anomalous claim payments by, for example, ensuring that insurance companies receive the results of all FSA field inspections in a timely manner and directing insurance companies to review the results of all completed FSA field inspections before paying claims that occur after inspections showed the crop was in good condition. 
Increase the use of the list of agents and adjusters with anomalous losses through actions such as directing insurance companies, during annual performance evaluations of insurance agents and adjusters, to focus more of their attention on that list.

Develop a mechanism, such as a revised electronic form, to collect additional data from insurance companies in order to facilitate the use of the companies' reviews in data mining.

We provided the Secretary of Agriculture with a draft of this report for review and comment. We received written comments from the acting USDA Under Secretary for Farm and Foreign Agricultural Services. In these comments, the acting Under Secretary stated it was ill-advised for us to suggest that Congress consider limiting or reducing premium subsidies without further study. The acting Under Secretary stated that in recommending a $40,000 limit on premium subsidies, the report does not fully account for all potentially negative impacts and costs resulting from such a change. However, as we state in the report, we do not recommend a $40,000 limit on premium subsidies per crop insurance participant. Instead, we used $40,000 as an example of a premium subsidy limit and noted that setting a premium subsidy limit higher or lower would have corresponding effects on cost savings. In addition, our report recognizes that setting a subsidy limit may have impacts, and we discuss some of these potential impacts. Moreover, at a time when the agriculture sector is enjoying record farm income and higher farmland values and the nation is facing severe deficit and long-term fiscal challenges, we believe that crop insurance premium subsidies—the single largest component of farm program costs—are a potential area for federal cost savings. Furthermore, the Administration's budget for fiscal year 2013 and the Congressional Budget Office each proposed a reduction in premium subsidies. 
These subsidies increased fourfold, from $1.7 billion in 2002 to $7.4 billion in 2011. USDA agreed with one of our recommendations and did not directly respond to the other three. Regarding our first recommendation—encouraging the completion of FSA county office inspections for the list of farmers with anomalous claim payments by requiring FSA state offices to monitor the status of their completion—USDA stated that it will update its written procedures to require FSA state offices to monitor county offices' completion of these inspections. Regarding our second recommendation—that USDA maximize the use of the list of farmers with anomalous claim payments by providing the results of completed FSA inspections to the insurance companies—USDA stated it is unlikely that FSA will be able to accomplish this recommendation, but that comment is not responsive to our recommendation. We clarified the language to say that insurance companies should receive the results of all inspections that have been completed. This effort would not entail additional work on the part of FSA. RMA's data mining contractor told us that it could complete this activity within a few days after an inspection was completed. Regarding our third recommendation—to direct insurance companies, during annual performance evaluations of insurance agents and adjusters, to focus more of their attention on the list of agents and adjusters with anomalous losses than on others—USDA reported that it was issuing guidance directing companies to provide to USDA the results of reviews conducted on each agent/loss adjuster identified on the anomalous agent/loss adjuster list provided by RMA. We agree that providing guidance to the companies is important, and we continue to believe that directing insurance companies to focus more attention on these agents and loss adjusters during annual performance reviews would produce additional benefits. 
Regarding the fourth recommendation—to develop a mechanism, such as a revised electronic form, to collect additional data from insurance companies in order to facilitate the use of the companies’ reviews in data mining—USDA did not clearly state whether it agreed or disagreed. USDA stated that as one of its information systems projects matures, it will find better ways to record and gather data for data mining. However, we continue to believe that the data mining contractor needs additional data from insurance company reviews in order to improve data mining, and that specific direction from USDA is needed to acquire it. USDA comments and our response are in appendix V. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or shamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Our objectives were to determine (1) the effect on program costs of applying limits on farmers’ federal crop insurance subsidies, as payment limits are applied to other farm programs, and (2) the extent to which the U.S. Department of Agriculture (USDA) has used data mining tools to prevent and detect fraud, waste, and abuse in the crop insurance program. 
To address the first objective, we reviewed eligibility standards, such as adjusted gross income limits and payment limits, in the provisions of the Food, Conservation, and Energy Act of 2008 (2008 farm bill); other statutes; and USDA regulations. We also interviewed officials from USDA's Farm Service Agency (FSA) and Risk Management Agency (RMA) regarding eligibility standards and payment limits in the 2008 farm bill for farm programs other than the crop insurance program. To determine the distribution of crop insurance subsidies among farmers who participate in the program, we analyzed RMA data for 2010 and 2011 on the number and percentage of farmers receiving various levels of subsidies and the locations of farmers who received higher subsidies. We selected $40,000 as an example of a potential subsidy limit because it is the payment limit for direct payments. Many participants in the crop insurance program also participate in other farm programs administered by FSA, and many of these other farm programs have payment limits based on benefits that are attributed to each interest holder in a farming operation. Under a scenario of a limit on premium subsidies, it is likely that these same attribution rules would also apply to premium subsidies for the crop insurance program. Therefore, in our analysis, we attributed the subsidies for each policy to the interest holders in the policy. We did so based on the payment share of each interest holder as recorded in FSA's validated Permitted Entity database, which is used to ensure compliance with payment limit rules. For entities, we attributed benefits through four levels, as appropriate. We then summed premium subsidies across policies for each crop insurance participant. 
For participants that were not found in FSA's Permitted Entity database, or for which RMA's database contradicted FSA's Permitted Entity database, we attributed premium subsidies by dividing them equally among the policyholder and the interest holders as reported in RMA's database. These participants represented 18.5 percent of the entities. We also reviewed USDA and other studies that examined participation in the crop insurance program and premium subsidies. In addition, we reviewed USDA data on the financial condition of farms of various sizes. Furthermore, we reviewed USDA reports on the availability of private risk management tools against crop losses and the effects of farm program subsidies on beginning and smaller farmers. Finally, we reviewed farm and crop insurance industry organizations' statements on the crop insurance program. To determine the extent to which USDA has used data mining tools to prevent and detect fraud, waste, and abuse in the crop insurance program, we analyzed how RMA uses two data mining lists—the list of farmers with anomalous claim payments and the list of insurance agents and adjusters with anomalous losses—and the methods it uses to develop these lists. We reviewed requirements related to data mining in the Agricultural Risk Protection Act of 2000 and the current and former standard reinsurance agreements; FSA guidance for field inspections; FSA letters to farmers with anomalous claim payments; data analyses and summaries on data mining tools developed by RMA's data mining contractor; USDA Inspector General reports and testimonies; and reports on RMA's completion of disaster payment reconciliations. We also interviewed RMA data mining contractor staff and RMA officials at headquarters and six regional compliance offices to identify RMA's uses of these data mining tools, weaknesses found in the tools, opportunities for increased use of them, and competing RMA priorities. 
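The attribution and limit methodology described above can be sketched in a few lines of code. The policy records, holder names, and dollar figures below are hypothetical, and the `attribute` function is an illustrative simplification (it omits, for example, the four-level attribution through entities):

```python
# Illustrative sketch of attributing premium subsidies to interest holders and
# estimating savings from a subsidy limit. All data below are hypothetical.

SUBSIDY_LIMIT = 40_000  # the example limit used in the report

# Each policy carries a premium subsidy and (holder, payment share) pairs from
# FSA's Permitted Entity database. An empty "shares" list stands in for a
# policy not found in that database; its subsidy is then divided equally
# among the holders reported in RMA's database, per the report's fallback rule.
policies = [
    {"subsidy": 90_000, "shares": [("farmer_a", 0.5), ("farmer_b", 0.5)]},
    {"subsidy": 30_000, "shares": [("farmer_a", 1.0)]},
    {"subsidy": 20_000, "shares": [], "holders": ["farmer_c", "farmer_d"]},
]

def attribute(policies):
    """Sum attributed premium subsidies per participant across all policies."""
    totals = {}
    for p in policies:
        if p["shares"]:
            parts = [(h, p["subsidy"] * s) for h, s in p["shares"]]
        else:  # fallback: divide equally among the holders RMA reports
            n = len(p["holders"])
            parts = [(h, p["subsidy"] / n) for h in p["holders"]]
        for holder, amount in parts:
            totals[holder] = totals.get(holder, 0) + amount
    return totals

totals = attribute(policies)
# Savings from a cap = each participant's attributed subsidy above the limit.
savings = sum(max(0, t - SUBSIDY_LIMIT) for t in totals.values())
print(totals)   # farmer_a: 75000, farmer_b: 45000, farmer_c and farmer_d: 10000 each
print(savings)  # 40000 (35000 over the cap for farmer_a, 5000 for farmer_b)
```

The key design point the report relies on is that subsidies are summed per participant across all of that participant's policies before the limit is applied, so a farmer cannot escape the cap by splitting coverage across policies.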
We also interviewed officials with USDA's Office of Inspector General on their views and uses of these tools. In addition, we worked with RMA's data mining contractor to analyze 2009 and 2010 data on FSA's completion of field inspections for policies of those farmers listed as having anomalous claim payments. We conducted tests of the reliability of the data, such as checking formulas, and found the data to be sufficiently reliable for the purposes of this report. We also interviewed officials with RMA and its data mining contractor to determine the process used to acquire FSA's field inspection data. We interviewed officials with FSA's headquarters office and the five FSA state offices for California, Colorado, Florida, North Dakota, and Texas to obtain information about these data, obstacles to completing the inspections, and suggestions for increasing their completion and reporting. We selected FSA's North Dakota office because of its high completion rate of field inspections (96 percent) for 2009 and 2010 and its large number of requests for field inspections (378). We selected the other four state offices because, over the 2-year period, they had low completion rates of field inspections (less than 33 percent) and at least 80 requests for field inspections. We conducted this performance audit from January 2011 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
[Table: adjusted gross income and payment limits for selected farm programs. For example, Direct and Countercyclical Program direct payments are subject to a $500,000 limit on average adjusted gross nonfarm income and a $750,000 limit on average adjusted gross farm income over the 3 preceding tax years; other program limits include a $50,000 annual rental payment, a $1,000,000 limit on average adjusted gross nonfarm income (which does not apply if more than 66.66 percent of adjusted gross income—the total of nonfarm adjusted gross income and farm adjusted gross income—was farm income), and a $300,000 total for all contracts for fiscal years 2009 through 2012.]

Figure 6 shows the locations of participating farmers who received more than $40,000 in premium subsidies for 2011. As the figure shows, many of these farmers were in the northern and southern plains. According to RMA officials, a region might have more farmers who received more than $40,000 in premium subsidies because farmers in the region have large-acreage farms; produce high-value crops, such as sugar beets; or have higher premium rates. For example, the average farm size in North Dakota is 1,241 acres, but the average size nationwide is 418 acres. In addition, high-value crops, such as sugar beets in North Dakota and fruits and vegetables in California, contribute to higher premiums and premium subsidies. Regarding premium rates, areas that have a higher risk of crop loss generally have higher premium rates. For example, the average premium rate in North Dakota is 17 percent, and the average premium rate nationwide is 10 percent. For 715,822 participating farmers, the sum of 2010 premium subsidies and 2010 administrative expense subsidies ranged from $1 to $10,000.

1. As we clearly state in the report, we do not recommend a $40,000 limit on premium subsidies per crop insurance participant. 
Instead, as we stated, we used $40,000 as an example of a premium subsidy limit and noted that setting a premium subsidy limit higher or lower would have corresponding effects on cost savings. In this connection, we provided information on the potential savings that would result if premium subsidies were limited to $100,000. Furthermore, limits on premium subsidies would not prevent potentially affected farmers from enrolling all of their crop acres in the crop insurance program and receiving claim payments when a loss occurs. The report also notes that savings could result from reducing the subsidy amount for all farmers participating in the program, or both limiting and reducing these subsidies. In proposing these changes to the crop insurance program, we also identified other considerations that would come into play, including the potential effect on large farms’ financial condition and on participation in the crop insurance program. 2. We disagree. This report does show regions of the country that would be more affected by a limit on premium subsidies. On page 19, we state that many of the participating farmers who received more than $40,000 in premium subsidies were in the northern and southern plains. Additional information on the locations of participating farmers who received more than $40,000 in premium subsidies for 2011 is presented in appendix III. 3. An assessment of the availability of credit to the agricultural sector was not the focus of our work, but our review of data from USDA’s Agricultural Resource Management Survey shows that larger farms, which are more likely to be affected by a limit on premium subsidies, generally have stronger financial ratios and credit worthiness than other farms participating in the crop insurance program. (See pages 21 and 22 of this report.) 
Furthermore, since we sent our draft report to USDA for comment, we identified two Federal Reserve Bank reports—one from the Federal Reserve Bank of Chicago and one from the Federal Reserve Bank of Kansas City—that reported that credit conditions for farmers are favorable. In addition, if premium subsidies were limited, an affected farmer could still purchase crop insurance, although the premiums might not be subsidized, or might be subsidized less than they are currently. Thus, an affected farmer would not lose access to credit. 4. USDA did not adjust its estimate of affected crop insurance participants and savings in premium subsidies to reflect how a limit on premium subsidies might actually be implemented. That is, we assumed that any subsidy limit would be administered as USDA's Farm Service Agency (FSA) administers payment limits for other farm programs—allocating the benefits according to the interest holders in the farming operation. Most participants in the crop insurance program also participate in other farm programs that FSA administers, and many of these other farm programs have payment limits based on benefits that are attributed to each interest holder in a farming operation. As explained in our methodology, in developing our estimate for a potential $40,000 subsidy limit, we used the payment share of each interest holder as recorded in FSA's validated Permitted Entity database, which FSA uses to ensure compliance with payment limit rules for farm programs. Using FSA's information on the payment share of each interest holder, we attributed subsidies for each crop insurance policy to the interest holders in the policy. Therefore, we estimated that up to 33,690 participating farmers would have been affected in 2011 by a reduced subsidy, for a savings of up to $1 billion if a $40,000 subsidy limit were applied. 
We believe our analysis provides a reasonable estimate of the number of participating farmers who might be affected by a limit on premium subsidies and the dollars that might be saved. (See app. I for more information on our methodology.) 5. As we note in this report, a limit on crop insurance premium subsidies would affect more farmers in some areas of the country than in other areas. We also note in the report that large farms are better positioned than smaller farms to pay a higher share of their premiums. Furthermore, a higher limit on premium subsidies would affect fewer farmers. In addition, limits on farm program benefits already have disproportionate impacts. For example, under the Supplemental Revenue Assistance Payments program and Noninsured Crop Disaster Assistance program, annual payments are limited to $100,000, which disproportionately affects farmers in regions that are more prone to natural disasters. In addition to a limit on premium subsidies, this report also examines reducing premium subsidy rates for all farmers, which would have a more proportionate effect across states and regions. However, it would also reduce subsidies for those who may be less able to afford higher premiums, particularly beginning and limited resource farmers, as well as socially disadvantaged farmers. 6. We do not agree that it would be virtually impossible to administratively track and control a limit on premium subsidies. Most farmers participating in the crop insurance program also participate in other farm programs that FSA administers. Many of the farm programs FSA administers already limit the payments an individual can receive. Therefore, we believe that FSA’s methods—which account for complicating factors such as the organization of farm businesses and multiple crops in multiple counties, and even multiple programs—could be applied to a limit on premium subsidies for crop insurance and that any addition to administrative burdens would not be significant. 
Moreover, as we stated in our report, premium subsidy rates vary by the level of insurance coverage that the farmer chooses and the geographic diversity of the crops insured. If RMA is capable of tracking these different subsidy rates, we believe USDA can also administer a subsidy limit. 7. We believe it would not be impractical to administer a limit on premium subsidies because of differences in dates and insurance periods. FSA attributes benefits to each individual or entity for each program that it administers. For each participant in a given program, payments are summed across all entities, crops, and counties for the crop year. Regarding livestock insurance, the amount of insurance purchased in comparison with crop insurance is very small. Moreover, this report did not discuss combining limits on premium subsidies for livestock insurance and crop insurance. 8. We do not agree that a limit on premium subsidies would prevent farmers from making sound and informed insurance choices. Under the crop insurance program, the amounts of a farmer’s premium subsidy and premium expense are estimated during the period before planting, when the farmer is making insurance choices. However, insurance companies determine the actual premium later in the growing season and bill the farmer at the end of the growing season. Therefore, to the extent that a limit on premium subsidies introduces additional uncertainty, it would likely be marginal. 9. We believe it is unlikely that a limit on premium subsidies would affect agricultural lenders’ decisions in providing farm operating loans. It is not clear how a limit on premium subsidies would introduce so much uncertainty about the amount of a farmer’s premium expenses that a lender could not decide whether to provide financing. Agricultural lenders already deal with a level of uncertainty about farmers’ revenues and expenses. In addition, lenders could require borrowers to purchase crop insurance. 10. 
As we stated in this report, the amount of savings from a limit on premium subsidies may depend on whether, and to what extent, farmers and legal entities reorganized their business to avoid or lessen the effect of limits on premium subsidies. In addition, some farmers would likely begin to report their spouse as a member of the farming operation, which, under payment limit rules, enables an operation to double the amount of benefits it can receive. Regarding potential reorganizations, most of the farmers and legal entities who participate in the crop insurance program also participate in FSA programs, and many of them have already reorganized their business because of these programs’ payment limits. These farmers and legal entities would be unlikely to reorganize further in response to a limit on premium subsidies. In addition, in some instances, the requirement that an individual or entity be actively engaged in farming to receive farm program benefits is likely to prevent the creation of entities in order to avoid a limit on premium subsidies. Furthermore, the 2008 farm bill decreased the incentive to reorganize a farming operation in order to avoid a limit on farm program payments by eliminating the “three-entity rule” and requiring direct attribution of payments to individuals. 11. This report includes information about how crop insurance participation and coverage levels may relate to spending on ad hoc disaster assistance. The report also notes that in view of the nation’s budgetary pressures, Congress may be less willing to approve ad hoc disaster assistance payments than it has in the past. In addition, the Administration’s proposed fiscal year 2013 budget addresses participation and ad hoc disaster assistance and states, “With current participation rates, the deep premium subsidies are no longer needed.” 12. In addition to federal cost savings, our report discussed several considerations that would come into play with limits on premium subsidies. 
Furthermore, we noted that FSA has extensive experience in administering limits on farm program benefits, which USDA does not recognize in its comments. We believe RMA could benefit from FSA's experience in administering payment limits. 13. We recognize that FSA, like most federal agencies, faces resource constraints. However, as we have previously reported, effective strategies help set priorities and allocate resources to inform decision making and help ensure accountability. Such priority setting and resource allocation is especially important in a fiscally constrained environment. 14. We clarified the language to say that insurance companies should receive the results of all inspections that have been completed. This effort would not entail additional work on the part of FSA. RMA's data mining contractor told us that it could complete this activity within a few days after an inspection was completed. 15. We are pleased that RMA is developing guidance and believe that this guidance may be a good first step toward increasing insurance companies' focus on anomalous agents and loss adjusters, who warrant greater attention. However, we continue to believe that directing insurance companies to focus more attention on these agents and loss adjusters during annual performance reviews would produce additional benefits. 16. It is unclear from RMA's response whether it agrees or disagrees with our recommendation. However, we continue to believe that the data mining contractor needs additional data from insurance company reviews to improve data mining, and that specific direction from the government is needed to collect these data. In addition to the individual named above, Susan Offutt, Chief Economist; Thomas M. Cook, Assistant Director; Kevin S. Bray; Gary T. Brown; Barbara J. El-Osta; Beverly Peterson; Anne Rhodes-Kline; Jeremy Sebest; and Carol Herrnstadt Shulman made key contributions to this report. 
Crop Insurance: Opportunities Exist to Reduce the Costs of Administering the Program. GAO-09-445. Washington, D.C.: April 29, 2009. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable. GAO-07-944T. Washington, D.C.: June 7, 2007. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable. GAO-07-819T. Washington, D.C.: May 3, 2007. Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades Are Potentially Significant. GAO-07-760T. Washington, D.C.: April 19, 2007. Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades Are Potentially Significant. GAO-07-285. Washington, D.C.: March 16, 2007. Suggested Areas for Oversight for the 110th Congress. GAO-07-235R. Washington, D.C.: November 17, 2006. Crop Insurance: More Needs to Be Done to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse. GAO-06-878T. Washington, D.C.: June 15, 2006. Crop Insurance: Actions Needed to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse. GAO-05-528. Washington, D.C.: September 30, 2005. Crop Insurance: USDA Needs to Improve Oversight of Insurance Companies and Develop a Policy to Address Any Future Insolvencies. GAO-04-517. Washington, D.C.: June 1, 2004. Department of Agriculture: Status of Efforts to Address Major Financial Management Challenges. GAO-03-871T. Washington, D.C.: June 10, 2003. Crop Insurance: USDA Needs a Better Estimate of Improper Payments to Strengthen Controls Over Claims. GAO/RCED-99-266. Washington, D.C.: September 22, 1999. Crop Insurance: USDA’s Progress in Expanding Insurance for Specialty Crops. GAO/RCED-99-67. Washington, D.C.: April 16, 1999. Crop Insurance: Increases in Insured Crop Prices and Premium Rates Raise the Administrative Expense Reimbursement Paid to Companies. GAO/RCED-98-115R. Washington, D.C.: March 20, 1998. 
Crop Insurance: Opportunities Exist to Reduce Government Costs for Private-Sector Delivery. GAO/RCED-97-70. Washington, D.C.: April 17, 1997. Crop Insurance: Federal Program Faces Insurability and Design Problems. GAO/RCED-93-98. Washington, D.C.: May 24, 1993. Crop Insurance: Program Has Not Fostered Significant Risk Sharing by Insurance Companies. GAO/RCED-92-25. Washington, D.C.: January 13, 1992.
The U.S. Department of Agriculture (USDA) administers the federal crop insurance program with private insurance companies. In 2011, the program provided about $113 billion in insurance coverage for over 1 million policies. Program costs include subsidies to pay for part of farmers’ premiums. According to the Congressional Budget Office, for fiscal years 2013 through 2022, the program costs—primarily premium subsidies—will average $8.9 billion annually. GAO determined the (1) effect on program costs of applying limits on farmers’ premium subsidies, as payment limits are set for other farm programs, and (2) extent to which USDA uses key data mining tools to prevent and detect fraud, waste, and abuse in the program. GAO analyzed USDA data, reviewed economic studies, and interviewed USDA officials. If a limit of $40,000 had been applied to individual farmers’ crop insurance premium subsidies, as it is for other farm programs, the federal government would have saved up to $1 billion in crop insurance program costs in 2011, according to GAO’s analysis of U.S. Department of Agriculture (USDA) data. GAO selected $40,000 as an example of a potential subsidy limit because it is the limit for direct payments, which provide fixed annual payments to farmers based on a farm’s crop production history. Had such a limit been applied in 2011, it would have affected up to 3.9 percent of all participating farmers, who accounted for about one-third of all premium subsidies and were primarily associated with large farms. For example, one of these farmers insured crops in eight counties and received about $1.3 million in premium subsidies. Had premium subsidies been reduced by 10 percentage points for all farmers participating in the program, as recent studies have proposed, the federal government would have saved about $1.2 billion in 2011. 
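The across-the-board reduction estimate can be roughly cross-checked against figures stated elsewhere in this report: 2011 premium subsidies of $7.4 billion at an average subsidy rate of 62 percent of premiums. The arithmetic below is a back-of-the-envelope check, not the report's actual policy-level calculation:

```python
# Rough check of the savings from cutting the premium subsidy rate by
# 10 percentage points, using aggregate figures cited in this report.
subsidies_2011 = 7.4e9    # total premium subsidies in 2011 (report figure)
avg_subsidy_rate = 0.62   # subsidies averaged 62 percent of premiums in 2011
total_premiums = subsidies_2011 / avg_subsidy_rate  # roughly $11.9 billion
savings = 0.10 * total_premiums                     # 10-percentage-point cut
print(round(savings / 1e9, 1))  # roughly 1.2, i.e., about $1.2 billion
```

The aggregate check lands close to the $1.2 billion estimate; the report's own figure rests on policy-level data, so the two are only expected to agree approximately.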
A decision to limit or reduce premium subsidies raises other considerations, such as the potential effect on the financial condition of large farms and on program participation. Since 2001, USDA has used data mining tools to prevent and detect fraud, waste, and abuse by either farmers or insurance agents and adjusters but has not maximized the use of these tools to realize potential additional savings. This is largely because of competing compliance review priorities, according to GAO’s analysis. USDA’s Risk Management Agency (RMA), which is responsible for overseeing the integrity of the crop insurance program, has used data mining to identify farmers who received claim payments that are higher or more frequent than others in the same area. USDA informs these farmers that at least one of their fields will be inspected during the coming growing season. RMA officials told GAO that this action has substantially reduced total claims. The value of identifying these farmers may be reduced, however, by the fact that USDA’s Farm Service Agency (FSA)—which conducts field inspections for RMA—does not complete all such inspections, and neither FSA nor RMA has a process to ensure that the results of all inspections are accurately reported. For example, RMA did not obtain field inspection results for about 20 percent of these farmers in 2009 and about 28 percent in 2010. As a result, not all of the farmers RMA identified were subject to a review, increasing the likelihood that fraud, waste, or abuse occurred without detection. Field inspections were not completed, in part because FSA state offices are not required to monitor the completion of such inspections. In addition, RMA generally does not provide insurance companies with FSA inspection results when crops are found to be in good condition, although USDA’s Inspector General has reported this information may be important for follow-up.
Past cases have revealed that some farmers may harvest a high-yielding crop, hide its sale, and report a loss to receive an insurance payment. Furthermore, RMA has not directed insurance companies to review the results of all completed FSA field inspections before paying claims that are filed after inspections show a crop is in good condition. As a result, insurance companies may not have information that could help them identify claims that should be denied. To reduce crop insurance program costs, Congress should consider limiting premium subsidies for individual farmers, reducing subsidies for all farmers, or both. GAO also recommends, in part, that USDA encourage the completion of field inspections. In commenting on a draft of this report, USDA did not agree that Congress should consider limiting premium subsidies, but GAO believes that when farm income is at a record high and the nation faces severe fiscal problems, limiting premium subsidies is an appropriate area for consideration. USDA agreed with encouraging the completion of field inspections.
The WBC program is administered through the Office of Women’s Business Ownership (OWBO) in SBA’s Office of Entrepreneurial Development (OED). The program was established by the Women’s Business Ownership Act of 1988 to provide long-term training, counseling, networking, and mentoring to women who own businesses or are potential entrepreneurs after Congress found that existing business assistance programs for small business owners were not addressing women’s needs. The program’s goal is to add more well-trained women entrepreneurs to the U.S. business community and to specifically target services to women who are socially and economically disadvantaged. In fiscal year 2007, SBA funded 99 WBCs throughout the United States and its territories. Private nonprofit organizations are eligible to apply for funds to set up WBCs, and successful applicants are initially awarded cooperative agreements for a maximum of 5 years. WBCs must raise matching funds from nonfederal sources such as state and local public funds, private individuals, corporations and foundations, and program income derived from WBC services. In the first 2 years of the 5-year award, each WBC is required to match SBA award funding at one nonfederal dollar for each two federal dollars. In the last 3 years, the match is one nonfederal dollar for each federal dollar. WBC award amounts cannot exceed $150,000 each fiscal year per recipient. Award amounts may vary depending upon a WBC’s location, staff size, project objectives, performance, and agency priorities. WBC funding is performance-based, and each additional 12-month budget period beyond the initial award may be exercised at SBA’s discretion. Among the factors involved in deciding whether to exercise an option for continued funding are the availability of funds, the extent to which past WBC funds were spent, and satisfactory performance against SBA-established performance measures, including the number of clients served and the number of jobs created.
WBCs are required to provide this performance data to SBA in quarterly reports. In the Women’s Business Centers Sustainability Act of 1999, Congress established the sustainability pilot program because of concerns that WBCs could not become self-sustaining in 5 years and needed continued SBA funding. Under the sustainability pilot program, WBCs that had been receiving funding for 5 years could receive sustainability awards for an additional 5 years. Criteria for receiving awards under the pilot program were similar to those for receiving the initial awards. WBCs were assessed on their record of performance and had to provide nonfederal matching funds equal to one dollar for each federal dollar. Unlike the regular WBC award, sustainability award amounts could not exceed $125,000 each budget year per recipient. As noted earlier, Congress recently replaced these sustainability awards with 3-year renewable awards of not more than $150,000 each year per recipient. SBA has not yet begun making these new awards. In addition to the WBC program, SBA’s SBDC and SCORE programs also provide training and counseling services to small business clients. The SBDC program was created by Congress in 1980. SBDC services include, but are not limited to, assisting prospective and existing small businesses with financial, marketing, production, organization, engineering, and technical problems and feasibility studies. Each state and U.S. territory has a lead organization that sponsors and manages the SBDC program. The lead organization coordinates program services offered to small businesses through a network of centers and satellite locations in each state that are located at colleges, universities, community colleges, vocational schools, chambers of commerce, and economic development corporations. In fiscal year 2007, the SBDC program received $87 million to make awards to 63 lead SBDCs throughout the United States. The SCORE program was founded in 1964 as a nonprofit organization.
Under the Small Business Act, as amended, SCORE is sponsored by and may receive appropriations through SBA. The SCORE program is designed to provide free expert advice to prospective and existing small businesses in all aspects of business formation, advancement, and problem solving. SCORE counselors are volunteers who assist clients through a Web site, SCORE chapter offices, SBA district offices, and other establishments. In fiscal year 2007, the SCORE program received $5 million to support its activities and currently has 389 chapters throughout the United States. Recent legislation addresses concerns about long-term funding for WBCs, but prior to this legislation, the funding structure had been in flux since the program’s inception in 1988. In establishing the WBC program in 1988, Congress authorized SBA to help private nonprofit organizations conduct projects that benefit small business concerns owned and controlled by women. The 1988 act allowed for demonstration projects that terminated in 1991. However, in 1991, Congress authorized SBA to make awards for 3-year projects, and in 1997 Congress authorized SBA to make awards to WBCs for 5-year projects. In its 1999 reauthorization of the WBC program, as noted earlier, Congress added 5-year sustainability funding for WBCs that successfully completed 5-year projects to provide additional time for the centers to become self-sustaining. Because the WBC program is a competitive discretionary award program, WBCs in the program compete annually for the maximum award amount but continue to receive SBA funds as long as their performance is satisfactory. WBCs that we spoke with identified two related factors that have largely been responsible for their funding uncertainties. First, because until recently the WBC program offered limited-term funding—in contrast to the SBDC and SCORE programs, which receive continuous funding— WBCs “graduated” from SBA support after 5 or 10 years.
Several WBCs that we spoke with expressed concern about the funding term limits and pointed out that the SBDC and SCORE programs do not have the same limits, even though SBA also administers those programs. Some WBCs in both the regular and sustainability programs also said that they were concerned about their ability to continue operations after losing SBA support. Second, Congress did not make the additional 5-year term for sustainability funding permanent. Instead, Congress extended the pilot program with each SBA reauthorization, creating uncertainty that limited SBA’s ability to manage the program effectively and causing concern among the WBCs themselves. Several WBCs said that they were concerned that sustainability funding was not a permanent aspect of the WBC program. Several of the WBCs that we spoke with said that funding uncertainties made it difficult to establish an annual program budget with performance goals. Each year, SBA requires that WBCs participating in its program submit project-year proposals with performance goals in anticipation of an award. WBCs are not guaranteed funding each year because SBA makes awards each year at its discretion. Also, because the program is competitive and performance based, WBCs may receive varying award amounts each year. As noted, WBCs in the regular program can receive annual awards up to $150,000, and those in the sustainability program can receive annual awards up to $125,000. OMB’s 2007 PART report found that frequent changes by Congress in the WBC program’s funding structure, delays in extending sustainability funding, and uncertainty about the future had created challenges for the program. OMB’s report also noted that SBA had taken steps to foster more consistent management of the WBC program but added that long-term planning was problematic because of the program’s funding structure.
When we spoke with officials at OMB, they emphasized that SBA appeared to be making a significant effort to assist WBCs, given the program’s limitations. They also noted that the funding challenges that WBCs faced after graduating from the sustainability pilot could be related to the fact that these organizations operate resource-intensive programs and collect nominal revenues in program fees, largely because of their focus on economically disadvantaged clients, causing them to rely heavily on external support. Our preliminary review indicates that WBCs that perform satisfactorily continue to receive funds until they complete the program, and SBA indicates that it will fund WBCs through the project term, subject to availability of funds. But SBA officials in headquarters and the district offices were aware of the challenges WBCs faced in planning annual budgets without knowing how much they would receive or whether sustainability funds would continue to be available. In discussing the WBC program’s limited term funding, some SBA district office officials emphasized that the agency had invested in creating successful WBCs and should be working to make those that performed well permanent SBA partners. Recent legislation for the WBC program replaces the sustainability pilot program with 3-year renewable awards, providing an opportunity for SBA to continue funding WBCs. Current program participants and those that have successfully graduated will be eligible to apply for continuous funding through these awards. The award process will remain competitive and the number of organizations competing could increase while SBA’s annual budget for the WBC program may not increase beyond the approximate $12 million provided in the last 5 years. However, increased award competition provides an opportunity for SBA to continue funding high-performing centers. 
Prior to the new program changes, SBA officials emphasized that the WBC program is the agency’s only performance-based program and said that they believed this provided an incentive for WBCs to continuously improve. SBA officials told us that by the end of fiscal year 2007, 26 WBCs would have graduated since the beginning of the program. SBA has criteria for ranking new award applicants and performance-based criteria for placing existing program participants into three funding categories for annual awards. As a result of the new legislation, which allows graduated WBCs to re-enter the pool of applicants for continuous funding and which changes the existing 5-year sustainability project terms going forward, SBA has begun revising its existing award process. SBA just completed making WBC awards for fiscal year 2007 to fund activities in fiscal year 2008, and SBA officials told us that they plan to begin providing the 3-year renewable awards in fiscal year 2008. Our preliminary review found that SBA had developed written procedures for monitoring the performance and financial management activities of WBCs and has taken steps to measure the WBC program’s effectiveness. Since 1997, as a condition of continued funding, SBA has been required to assess WBCs’ performance at least annually through programmatic and financial examinations. SBA also requires that WBCs submit performance and financial reports quarterly to describe their progress in meeting annual performance goals and to detail program expenses that qualify for SBA reimbursement. Some of the performance data that SBA collects from WBCs are reported in the agency’s annual performance reports through several output and outcome measures that are meant to evaluate the WBC program’s performance and effectiveness. As part of a broader impact assessment of its business assistance programs, in 2004, SBA initiated a 3-year longitudinal study of the WBC program, surveying clients served by WBCs nationwide.
SBA relies heavily on District Office Technical Representatives (DOTRs) to carry out oversight responsibilities, but our preliminary review suggests that the downsizing of SBA’s staffing may have created challenges for DOTRs in fulfilling their assigned responsibilities. District directors currently assign the role of DOTR as a collateral duty to district office staff. In 2001, we reported that DOTRs had been given an increased role in assessing WBCs’ performance to ensure that the programs were fiscally sound and functioning smoothly. To this end, we reported that DOTRs were receiving intensive training each year at the postaward conference at SBA headquarters on how to monitor the WBCs’ programmatic and financial activities. DOTRs are expected to conduct the WBC’s programmatic and financial examinations semiannually, but also have other program duties and full-time agency responsibilities. SBA has a list of 23 responsibilities for DOTRs, some of which involve oversight, including (1) reviewing the WBC’s requests for project revisions, (2) determining the extent to which the WBC is meeting the match requirement, (3) reviewing the scope and quality of services provided to clients, (4) reviewing all WBC signage and media, and (5) helping to resolve problems. DOTRs are also expected to act as advocates for the WBCs within their district. Some of the DOTRs’ responsibilities related to this role include (1) ensuring that the district office displays and distributes WBC brochures; (2) collecting success stories from WBCs to be used for publicizing the program; and (3) including WBCs in district office conferences, workshops, and other events for women business owners. The DOTRs’ total responsibilities for the WBC program appear to be substantial, particularly since this oversight is a collateral role. 
Given SBA’s downsizing in recent years, some DOTRs may have more responsibilities than in the past, leaving them less able to perform their WBC program duties effectively, and others new to the role may lack the necessary experience and training. Although most WBCs we interviewed spoke positively of their relationship with their DOTR, several told us that the reduction in district office staffing had led to changes, including assigning DOTR responsibilities to a different district office staff member. DOTRs still attend required training for the WBC program annually at SBA headquarters, and SBA provides them with a handbook to assist them in performing their duties. However, district office staff at one location felt that DOTRs were not adequately trained to conduct the financial component of WBC programmatic and financial examinations and told us that SBA headquarters had previously coordinated financial examinations for WBCs. When we followed up with OWBO officials, they said that in 2004 a requirement was added that WBCs’ financial records be certified annually by a certified public accountant (CPA), both because the agency recognized that some DOTRs lacked this expertise and because of isolated incidents of mismanagement of WBC award funds. OWBO officials also said that they were coordinating with SBA’s Office of SBDCs, which is also under OED, to use SBDC financial examiners for these onsite financial reviews of WBCs but added that recently there had not been enough staff to do all of the reviews. The officials also said that OED was reviewing how future financial audits for all of SBA’s business assistance programs would be conducted. Our preliminary review found that SBA had taken some steps to adapt program oversight procedures to staffing changes in district offices. For example, before January 2007, DOTRs conducted programmatic and financial examinations four times a year; SBA then switched to semiannual examinations to conserve its staff resources.
In March 2007, SBA also revised its reporting procedures for WBCs to streamline communication and reduce review and processing times. For example, WBCs had previously submitted quarterly financial reports with reimbursement requests through the district office but now submit them directly to OWBO and copy the district office. These and other revisions that SBA has made to date appear to have been made on an as-needed basis and were not part of a strategic process or plan to revise its oversight activities. WBCs also cited concerns about communication with SBA. One study that we reviewed reported that 54 percent of 52 WBCs surveyed said that SBA could improve its communication with them. OWBO, which administers the program, conducts monthly conference calls with the WBCs and DOTRs, but some WBCs said that the calls were not a good forum for asking questions even though the topics covered in the calls may raise questions. OWBO also uses email to communicate policy changes and make interim information requests, but several WBCs said these communications often came without sufficient explanation and mentioned areas in which policy changes or program requirements were unclear. The study specifically noted that better communication should include an effort to seek information from WBCs on how SBA’s frequent information requests and policy changes affected WBC operations. Some WBCs also told us that they were not sure how well they were performing because they did not receive feedback on semiannual examinations or the reports they submitted quarterly to SBA. SBA officials told us that they are aware of this concern and are taking steps to make the performance-based funding process more transparent. Based on our preliminary review, we found that the WBCs we spoke with focused on a different type of client than the SBDCs and SCORE chapters in their areas, and several WBCs actively coordinated with the other programs to avoid duplicating services.
But based on our review to date, the centers appear to lack guidance and information from SBA on how to successfully coordinate. Consistent with the WBC program’s statutory authority and SBA requirements, WBCs tailor services to meet the needs of economically and socially disadvantaged women. According to one academic study and WBCs we reviewed, WBCs offered services emphasizing financial literacy and more intensive long-term business plan training. Through our work, we also found that WBCs tended to serve smaller businesses with fewer employees and lower revenues than SBDCs and SCORE. According to an SBA study of WBCs, WBC clients had businesses with an average of 2.5 employees that produced average annual revenues of $63,694, while other SBA business assistance programs served businesses with an average of 4.5 employees and $175,076 in annual revenue. Most WBCs told us that they referred clients to the SBDCs and SCORE when appropriate, and several coordinated services with the other programs to leverage resources and avoid duplication. SBA officials told us that they expected district offices to ensure that the programs did not duplicate each other, and the program requirement suggests that WBCs can promote coordination through co-sponsorship arrangements or memorandums of understanding. However, SBA has not provided detailed guidance explaining how WBCs could effectively coordinate with SBDC and SCORE. Lacking such guidance, WBCs used a variety of approaches to facilitate coordination. Some coordination efforts were initiated by local business assistance providers, including WBCs, and involved a memorandum of understanding or regularly scheduled meetings. For example, a WBC in Wisconsin coordinated with SBDC, SCORE, and other small business service providers in the area to develop a detailed triage system for small business clients in their area. 
In order to better coordinate services, the WBC and other Wisconsin business assistance providers developed a flow chart to help service providers divide resources and determine where to refer customers. In some cases, we found that the SBA district office was active in the coordination effort and participated in regular meetings or organized events that included all of the programs. Several WBCs were co-located with an SBDC, allowing the two programs to benefit from shared office space and other resources. However, our preliminary review also found that some WBCs experienced challenges in their attempts to coordinate services with SBDC and SCORE. Some WBCs told us that coordinating services could be difficult. Several WBCs told us that they had considered co-locating or sharing space with an SBDC or SCORE chapter in order to reduce costs but feared that co-location would inhibit the WBC’s ability to maintain its identity and reach its target client group of low-income women. WBCs and SBDCs are both measured on the number of clients that participate in small business training and counseling services, and one WBC told us that co-location would cause WBCs to compete for clients. Also, in some instances SBA encouraged WBCs to provide services similar to those that SBDCs were already providing to small businesses. For example, one WBC told us that staff were encouraged to develop a government procurement curriculum although an SBDC in their area was already providing this service to small business clients. These concerns and uncertainties thwart coordination efforts and could increase the risk of service duplication in some geographic areas. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Committee may have. For additional information about this testimony, please contact William B. Shear at (202) 512-8678 or Shearw@gao.gov.
Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Kay Kuhlman, Assistant Director; Bernice Benta, Michelle Bracy, Tania Calhoun, and Emily Chalmers. This is a work of the U.S. government and is not subject to copyright protection in the United States. This published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Small Business Administration (SBA) provides training and counseling services to women entrepreneurs through the Women's Business Center (WBC) program. With approximately $12 million in fiscal year 2007, SBA funded awards to 99 WBCs. However, Congress and WBCs have expressed concerns about the uncertain nature of the program's funding structure. Concerns have also been raised about the possibility that the WBC and two other SBA programs, the Small Business Development Center (SBDC) and SCORE programs, are duplicating each other's efforts. This testimony discusses preliminary views on (1) uncertainties associated with the funding process for WBCs; (2) SBA's oversight of the WBC program; and (3) actions that SBA and WBCs have taken to avoid duplication among the WBC, SBDC, and SCORE programs. GAO reviewed policies, procedures, examinations, and studies related to the funding, oversight, and services of WBCs and interviewed SBA, WBC, SBDC, and SCORE officials. Until 2007, WBCs were funded on a temporary basis for up to 10 years, at which time it was expected that the centers would become self-sustaining. Beginning in 1997, SBA made annual awards to WBCs for up to 5 years. Because of concerns that WBCs could not sustain their operations without continued SBA funding, in 1999, Congress created a pilot program to extend funding an additional 5 years. Due to continued uncertainty about WBCs' ability to sustain operations without SBA funding, in May 2007, Congress passed legislation authorizing renewable 3-year awards to WBCs that "graduated" from the program after 10 years, as well as to current program participants. Like the current awards, the 3-year awards are competitive, and more centers may be applying for limited dollars. SBA is currently revising its award process to incorporate the new program changes. 
Though SBA has oversight procedures in place to monitor WBCs' performance and use of federal funds, staff shortages from the agency's downsizing and limited communication may hinder SBA's oversight efforts. SBA relies extensively on district office technical representatives (DOTRs) to oversee WBCs, but these staff members also have other job responsibilities and may not have the needed expertise to conduct some oversight procedures. SBA provides annual training and has taken steps to adjust its oversight procedures to adapt to staffing changes, but concerns remain. Some WBCs also cited communication problems, and one study reported that 54 percent of 52 WBCs responding to the study's survey said that SBA could improve its communication with the centers. For example, some WBCs told us that SBA did not provide sufficient feedback on their performance. Under the terms of the WBC award, the centers are required to coordinate with local SBDCs and SCORE chapters. SBA officials told us that they expected district offices to ensure that the programs did not duplicate each other. However, based on our preliminary review, we found that SBA provided limited guidance on how to successfully carry out coordination efforts. Most of the WBCs that we spoke with explained that in some situations they referred clients to an SBDC or SCORE counselor, and some WBCs also took steps to more actively coordinate with local SBDCs and SCORE chapters to avoid duplication and leverage resources. However, some WBCs told us that coordinating services was difficult, as the programs were each measured by the number of clients served and could end up competing for clients. Such concerns thwart coordination efforts and could increase the risk of duplication in some geographic areas.
All states, the District of Columbia, U.S. territories, and some Indian tribes have laws and/or codes requiring convicted sex offenders to register with local or state law enforcement authorities, the purpose of which is to enhance public protection and provide an additional investigative tool to law enforcement agencies. As mentioned previously, in most states, the sex offender registry is centrally maintained by a state criminal justice agency, such as the state police or a department of public safety. Between 1994 and 2003, Congress passed a series of laws requiring states to establish sex offender registries to be eligible to receive certain federal funds. More recently, in 2006, Congress passed the Walsh Act to provide more consistency nationwide among the states’ sex offender registration programs and to make it more difficult for sex offenders to evade monitoring. The Walsh Act requires states to modify their registration systems in accordance with a comprehensive set of minimum standards, or risk losing a percentage of certain federal grant program funds. These minimum standards include who must register, what information must be in the registries, how often registrants must reappear in person to verify their registration information and have new photographs taken, the number of years that offenders must maintain their registration, and the penalties for failure to register. To be useful, sex offender registry information must be current and complete. Under the Walsh Act, sex offenders who change their name, residence, or employment or student status must, within 3 business days, appear in person in at least one jurisdiction involved and provide notice of all changes. That jurisdiction must immediately provide updated information to all other jurisdictions in which the sex offender is required to register. 
To monitor the interstate movement of offenders, states participate in NSOR, which was activated in 1999 as a component of the FBI’s National Crime Information Center. NSOR enables law enforcement agencies to share information across states. For instance, law enforcement can run name checks against NSOR to identify sex offenders who have failed to register after moving from one state to another. NSOR is available to law enforcement only and allows for the use of extensive personal identifying information, including alias identifications, in queries for potential matches. According to the FBI, to meet the needs of law enforcement in dealing with offenders who may provide false information upon being arrested, each NSOR record can contain up to 100 names, 10 Social Security numbers, and 10 dates of birth. Convicted sex offenders who fail to satisfy registration requirements are subject to state or federal prosecution. The Walsh Act requires states to impose criminal penalties (including a maximum term of imprisonment that is greater than 1 year) on sex offenders who fail to comply with registration requirements. In addition, the Walsh Act makes failure to comply with registration requirements a federal crime (punishable by up to 10 years in prison) for sex offenders who travel between states or Indian tribal jurisdictions, or whose registrable offenses are for federal, D.C., Indian, or territorial crimes. Noncompliance by offenders released to community supervision generally may also be punishable by revocation of release. Despite the potential for prosecution or revocation of release, ensuring compliance with registration requirements is a significant challenge. According to the Center for Sex Offender Management, every state is grappling with problems regarding the accuracy of its sex offender registry.
To better ensure compliance with registration requirements, some observers have called for more interagency collaboration—to include, in particular, a role for motor vehicle agencies. The National Center for Missing and Exploited Children has advocated flagging driver-license and vehicle registration files of sex offenders as a way to keep law enforcement updated on address changes and other personal data. In addition, following establishment of NSOR in 1999, the FBI encouraged states to take advantage of the national registry. For instance, in guidance distributed in 1999 to all states’ sex offender registry points of contact, the FBI noted that, upon issuing new driver’s licenses, motor vehicle agencies could initiate a check of NSOR and provide the results (i.e., possible hits) to an authorized criminal justice agency for investigation to verify the identity of the individuals and determine whether they were required to register as sex offenders under the applicable state’s laws. Aside from the prospective screening process discussed in section 636 of the Walsh Act, the states say they are faced with extensive demands in implementing another federal law, the REAL ID Act, which creates national standards for the issuance of state driver’s licenses and identification cards. For example, the REAL ID Act contemplates the use of five national electronic systems to facilitate verification, but currently only one of these systems is available on a nationwide basis. As of July 2007, 22 of the nation’s 50 states were using some form of driver’s license-related process to encourage registration or provide additional monitoring of convicted sex offenders. Generally, as indicated in figure 1, these states and processes can be grouped among five categories—(1) mandatory-identification states, (2) annual-renewal states, (3) license-suspension states, (4) license-annotation states, and (5) cross-validation states. 
As shown in figure 1, nine states—Alabama, Arizona, Delaware, Florida, Indiana, Louisiana, Michigan, Mississippi, and Texas—specifically require convicted sex offenders to obtain either a driver’s license, an identification card, or a sex offender registration card issued through driver’s license-related processes. Further, five of these nine states—Alabama, Delaware, Florida, Louisiana, and Mississippi—also label the applicable driver’s license, identification card, or registration card with an annotation that identifies the person as a sex offender. Two other states, Kansas and West Virginia, also have an annotation requirement, although these states do not specifically require convicted sex offenders to obtain either a driver’s license, an identification card, or a sex offender registration card. Two of the seven “annotation” states, Kansas and Louisiana, provided us examples of their annotated driver’s licenses and identification cards (see fig. 2). In seven states, the duration of time between driver’s license renewals for convicted sex offenders is 1 year (in contrast to the multiyear period typical for other residents). The renewal requirement can be even more frequent; Mississippi law, for example, requires renewals every 90 days for convicted sex offenders. And, under Florida law, within 48 hours after any change of address in permanent or temporary residence (or a change of name because of marriage or other legal process), convicted sex offenders are required to report in person to a driver’s license office of the Florida Department of Highway Safety and Motor Vehicles to update either a driver’s license or an identification card. Mississippi was one of the first states in the nation to have a sex offender registration process that broadly utilizes motor vehicle agency or driver’s license services. 
Since 2005, Mississippi has required convicted sex offenders to personally appear every 90 days at any driver’s license office in the state and obtain a sex offender registration card (similar to a driver’s license) with updated information (e.g., photograph and address), which is then forwarded to the state’s sex offender registry. According to state officials, this process works efficiently in part because the Mississippi Highway Patrol (which is responsible for driver services) and the Mississippi Bureau of Investigation (which is responsible for maintaining the sex offender registry) are under a single agency, the Mississippi Department of Public Safety. Also, the officials noted that the capability to use driver’s license offices to register offenders was particularly feasible, given that offices are located in 80 of the state’s 82 counties—and all offices are equipped to enter data, update records, take photographs, and issue cards. For additional information about the 22 states’ driver’s license-related processes to encourage registration or provide additional monitoring of convicted sex offenders, see appendix II. Nevada is the one state that screens all applicants against a state sex offender registry before issuing a driver’s license or identification card; however, Nevada does not screen against the FBI’s NSOR. Thus, while Nevada’s screening process most closely reflects the potential screening capability discussed in section 636 of the Walsh Act, Nevada’s screening process would not detect a sex offender who moved to Nevada from another jurisdiction without permission and did not register in the state (an “absconder”). Absent other cross-validation information, the offender could possibly receive a regular driver’s license (or identification card) with a 4-year expiration term. Nevada’s screening process is based on state law enacted in 2005—Nevada Senate Bill 341, Chapter 507 of Statutes of Nevada 2005. 
Pursuant to this law, beginning July 1, 2006, the Department of Motor Vehicles may not issue or renew a driver’s license to a convicted sex offender until the Department of Motor Vehicles has received information from the Department of Public Safety’s central repository or other satisfactory evidence indicating that the convicted sex offender is in compliance with registration requirements. Under the process subsequently developed and used to implement the requirements of this statute, all individuals applying for an initial driver’s license, requesting a driver’s license renewal, or requesting other services (e.g., a duplicate license or an update of information) from the Nevada Department of Motor Vehicles are screened against the state’s sex offender registry. To initiate the screening process, the Department of Motor Vehicles electronically transmits a query to the Department of Public Safety’s central repository, which conducts a search against the state’s sex offender registry records. The data elements used in the screening or matching process to determine if the applicant is a sex offender are the applicant’s last name, first name, date of birth, and Social Security number—and, if applicable, a previously issued operator license number (driver’s license number), including the state of issuance. Using these data elements, the Department of Public Safety conducts the search and provides a single-digit response (0, 1, or 2) to the Department of Motor Vehicles—normally within 15 seconds, according to state officials. The applicable single-digit responses and related actions are as follows (see fig. 3):

0: No definitive match to a single record in the sex offender registry. The Department of Motor Vehicles follows its normal procedures and issues the individual a 4-year license.

1: Positive match to a record in the sex offender registry, and the person is in compliance with registration requirements. The Department of Motor Vehicles issues the individual a 1-year license (versus the standard 4-year license).

2: Positive match to a record in the sex offender registry, and the person is not in compliance with registration requirements. A driver’s license is not issued. Rather, the Department of Motor Vehicles gives the noncompliant offender a standardized, 1-page printout of instructions that outlines how to resolve the matter.

Nevada’s automated process does not require customer-service employees at the Department of Motor Vehicles to review or interpret responses to the query. For example, when an applicant is identified as a compliant sex offender, the system software automatically adjusts the driver’s license expiration date to 1 year. On the other hand, when the system identifies an individual as a noncompliant sex offender, it produces the standardized printout of instructions rather than a driver’s license. In Nevada, convicted sex offenders must appear in person at a Department of Motor Vehicles office to conduct all driver’s license-related services. Offenders are not permitted to obtain or renew a license by mail or Internet, nor by use of interactive computers (kiosks) available at licensing locations. To ensure that convicted sex offenders do not circumvent the in-person requirement, all mail, Internet, and kiosk transactions are screened against the state’s sex offender registry. If a positive match results, the individual is informed that the transaction must be conducted in person at a Department of Motor Vehicles location, irrespective of whether the individual is or is not in compliance with registration requirements. 
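Nevada's coded-response handling can be sketched as follows. The response codes (0, 1, 2) and the resulting actions match the report's description; the function name and the returned strings are illustrative assumptions, not Nevada's actual software.

```python
# Hypothetical sketch of how a motor vehicle system might act on the
# single-digit response from the Department of Public Safety's registry
# search, as described above. No employee interpretation is required.
def handle_registry_response(code):
    if code == 0:    # no definitive match to a single registry record
        return "issue 4-year license"
    elif code == 1:  # positive match; offender is compliant with registration
        return "issue 1-year license"
    elif code == 2:  # positive match; offender is NOT compliant
        return "withhold license; print standardized instructions"
    raise ValueError("unexpected response code: %r" % code)

for code in (0, 1, 2):
    print(code, "->", handle_registry_response(code))
```

Encoding the decision in software, rather than asking customer-service staff to interpret registry results, reflects Nevada's stated goal of keeping its counter employees out of a law enforcement role.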
In our survey of 26 states, most of the responding motor vehicle agencies and sex offender registries reported that (1) moderate or major modifications to their current IT systems would be needed to screen driver’s license applicants against the respective state’s sex offender registry and the FBI’s NSOR before issuing a license and (2) the most significant cost factor would be software modifications. Also, the agencies generally indicated that reliable cost estimates for establishing the prospective screening system discussed in section 636 of the Walsh Act cannot be calculated until the system’s operational requirements or business rules are clearly defined. As figure 4 shows, 30 of 38 state agencies (in the 25 states that responded to our survey) reported that moderate or major modifications to their current IT systems would be needed to establish the type of real-time screening process discussed in section 636 of the Walsh Act. Moreover, most of the responding state agencies identified development of software, telecommunications, and business rules as key factors that could affect the overall costs of the modifications—and that each of these factors could have a moderate or major impact on the overall costs (see fig. 5). Generally, 33 of the responding agencies indicated that software modifications would be the most significant cost factor. For example, in responding to our survey, officials from a West Coast state’s motor vehicle agency emphasized that major software modifications—i.e., replacing or upgrading information system programming—would be needed to establish the capability to screen driver’s license applicants against the state’s sex offender registry and the FBI’s NSOR. The officials explained that software modifications to support logic changes in calculations of expiration dates (e.g., 1-year versus 5-year)—as well as calculations of the applicable fees to collect—would be needed in seven automated systems, which are interrelated. 
For example, according to the officials, two of the interrelated systems are (1) the automated system that supports driver’s license and identification card issuance, revenue processing, and workload reporting and (2) the data warehouse where activity to driver records is stored and used for analysis and statistical reporting. Thus, because numerous automated systems would be affected, the state officials reiterated that establishing the proposed screening process would be a major effort, requiring a project manager and both internal resources and contractor support. Moreover, given competing demands, the officials expressed concerns about taking on another major project. For instance, the officials noted that the agency was in the process of implementing various other IT projects, in addition to efforts related to implementing REAL ID Act requirements. Similarly, officials from another state’s motor vehicle agency responded that the proposed screening process would necessitate major software modifications to the agency’s automated systems. The officials explained that the software used to issue licenses and the software used to account for fees collected are intertwined, and each is governed by complex rules and procedures that are not easily changed without affecting the entire program. Among other complexities, the officials noted that the state issues various types of driver’s licenses (e.g., private licenses, commercial and non-commercial licenses, conditional licenses, ignition interlock licenses, etc.) and that the fees vary by type of license. Further, the state officials commented that—given competing demands for programming resources, most notably demands generated by the REAL ID Act—the agency would be in no position for several more years to even begin making the needed software modifications associated with the prospective screening system. 
To support the prospective screening process discussed in section 636 of the Walsh Act, state motor vehicle agencies said they would need an electronic telecommunication interface not only with the respective state’s sex offender registry agency, but also with the FBI’s NSOR. In 23 of the 26 states we surveyed, separate organizational entities are responsible for issuing driver’s licenses and maintaining the respective state’s sex offender registry—and these states reported that no electronic telecommunication interface currently exists to facilitate the real-time exchange of data. Thus, they reported that the interface capability would have to be established and would be another key cost factor. As figure 5 further indicates, the state agencies also reported that another key cost factor is the prospective screening system’s business rules, which should specify operational and functional requirements. Further, the states indicated that reliable cost estimates for establishing the prospective screening system discussed in section 636 of the Walsh Act cannot be calculated until the system’s business rules are clearly defined. For example, regarding the prospective screening system, states said that business rules should fully describe the software functionality to be delivered, including what algorithm to use in determining whether a driver’s license applicant is a person listed in the sex offender registry and what happens when an applicant is matched to more than one record in the sex offender registry. We have found in past work that establishing well-defined business rules is critically important in being able to make reliable cost estimates for any IT system or project. Decisions made on business rules can have far-reaching impacts on all aspects of project delivery, including costs. Among other considerations, requirements should fully describe the software functionality to be delivered. 
Studies by GAO and others have shown that problems associated with requirements definition are key factors in software projects that do not meet their cost, schedule, and performance goals. Beyond the IT and cost issues discussed in the previous section, successful implementation of a driver’s license screening program for sex offenders will also hinge on how well the program incorporates key design considerations. Developing an effective nationwide screening program could be a daunting challenge given the different processes, procedures, databases, and operational environments in the motor vehicle and law enforcement agencies across the nation. In addition to the various IT issues noted earlier, our conversations with federal, state, and AAMVA officials identified key operational challenges that could affect the successful implementation of the type of screening program discussed in the Walsh Act. For example, a recurring observation by the motor vehicle agency officials we contacted is that their offices are already overburdened. Some states are still addressing earlier federal mandates such as the requirements of the Motor Carrier Safety Improvement Act of 1999, while, as noted earlier in this report, implementing the requirements of the REAL ID Act is also proving difficult for states. At the same time, state legislatures are tasking motor vehicle agencies with new responsibilities that generate demands on programming and other resources, according to the state agencies we surveyed. Consequently, a key concern expressed by state and AAMVA officials is that a sex offender screening process could become an unfunded mandate and be difficult, if not impossible, for states to execute on their own because their budgets are already strained. Moreover, the efficient operation of the screening process could be problematic given the different entities that would need to be integrated. 
For example, according to AAMVA, some state motor vehicle agencies are independent, while others are under a state’s Department of Revenue, Department of Public Safety, or Department of Transportation. Likewise, in some jurisdictions, state-issued identification cards, which could be used to track those individuals who lack driver’s licenses, are issued by non-motor vehicle agencies. Consequently, these different entities would need to coordinate with one another and share information for the screening process to function effectively. Individual motor vehicle offices within a particular state can differ as well, depending, for example, on whether they are located in urban or more remote locales. This in turn could affect the number of staff or physical space available to carry out the screening program. Some populations could present unique screening challenges. For example, according to AAMVA, in urban locations such as Manhattan, a number of residents do not have driver’s licenses and would need to obtain identification cards in order to be included in the screening process. Further, according to the director of the Department of Justice’s Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART) Office, Indian tribes differ as to how they handle driver’s licenses. Some tribes issue their own driver’s licenses, whereas other tribes rely on applicable state agencies. Another challenge noted by the director is that records of Indian tribal court convictions of sex offenders may not be readily accessible for screening because such records generally are not contained in either state or national sex offender registries. State and AAMVA officials also underscored the importance of not adversely affecting the main mission of the motor vehicle agencies. 
Although they saw value in having the motor vehicle agencies support efforts to track and monitor sex offenders, and acknowledged that the public would feel safer knowing that states were taking these additional steps to ensure sex offenders were complying with state registration requirements, the officials were also concerned that taking on this role could divert resources from the agencies’ core business functions and impair customer service. A final challenge we heard was that state motor vehicle agencies usually do not have access to sex offender registry or criminal history records because they are not considered to be law enforcement entities. As a result, legislative or administrative action at the state level would be needed to authorize that access. At the federal level, FBI Criminal Justice Information Services Division officials explained that non-law-enforcement agencies (such as state motor vehicle agencies) traditionally have not been authorized by statute to access federal databases that contain criminal history records, such as the National Crime Information Center and NSOR. However, according to the Department of Justice’s Office of Legal Policy, the Attorney General now has the authority under the Walsh Act to make determinations regarding access to NSOR. Still, the Office of Legal Policy commented that the department generally does not act unilaterally. Rather, the preference is to use the FBI Criminal Justice Information Services Division’s advisory process, which includes obtaining input from the user community (federal, state, and local law enforcement). The numerous challenges to implementing the sex offender screening program discussed in the Walsh Act highlight the importance of a sound design that can function effectively in the different operating environments found in the public safety agencies, motor vehicle agencies, and other offices across the nation that would be involved in the screening process. 
In particular, based on our interviews with federal, state, and AAMVA officials, to the extent the government moves forward with a sex offender screening program that employs driver’s license processes, it will be important that the screening program be designed to (1) minimize the burden on states, (2) maintain customer service, (3) mitigate unintended consequences, and (4) communicate timely and actionable information on noncompliant offenders to law enforcement personnel. Also, another important design consideration, as indicated by our prior work on internal controls, is to provide a basis for assessing the effectiveness of the screening program. These design considerations could affect the successful implementation of the screening program and its costs. State and AAMVA officials described various ways they believed would reduce the implementation and operational challenges that could result if the screening process discussed in the Walsh Act were executed nationwide. Their suggestions centered on exploring potential efficiencies that could be gained from leveraging existing IT infrastructure, batch processing agency records, and coordinating the implementation of a sex offender screening program with implementation of the REAL ID Act, to the extent feasible. Each is discussed in greater detail below. Leverage Existing IT Infrastructure: In response to our survey, nine states indicated that using driver’s license processes to encourage convicted sex offenders to comply with registration requirements might be achieved more efficiently by expanding or leveraging IT systems used for existing administrative functions rather than by developing a new telecommunication system from the ground up to link motor vehicle offices, state law enforcement agencies, and the FBI’s NSOR. 
In particular, AAMVA and state officials noted that AAMVA operates a secure, private data services network that already links the motor vehicle agencies of all 50 states and the District of Columbia. Although U.S. territories and Indian tribes are not part of this system and would need to be connected (thus incurring additional costs), and additional resources would be required to operate the system, AAMVA officials said the system possibly could form the needed conduit between state and federal agencies at a lower cost. Batch Process Agency Records: According to officials in three of the states we contacted, batch processing motor vehicle agency records against sex offender registries is a potentially less costly alternative to the real-time screening process currently performed in Nevada and discussed in the Adam Walsh Act. With batch processing, driver’s license applications from multiple individuals would be bundled together into batches, and the screening process would be executed during the evening or other less-busy times. With a real-time screening process, each driver’s license applicant is matched against the sex offender registry in the course of over-the-counter transactions. Officials in one of the three states noted that batch processing could have several advantages, such as significantly simplifying IT requirements, reducing development and ongoing costs, and relieving motor vehicle agency employees from any direct confrontations with sex offenders. Officials from the FBI’s Criminal Justice Information Services Division echoed this point, noting that batch processing could facilitate a more efficient allocation of resources because it does not require immediate attention by staff. In comparison, real-time screening requires staff to research each query, potentially interrupting their performance of other tasks. 
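The contrast between batch and real-time screening described above can be illustrated with a short sketch. The records, the matching rule (exact match on last name, first name, and date of birth), and all function names are hypothetical; an actual implementation would query the state registry and the FBI's NSOR rather than an in-memory set.

```python
# Invented registry keyed on (last name, first name, date of birth).
registry = {("DOE", "JOHN", "1970-01-01")}

def screen(applicant):
    """Real-time style check: one applicant per over-the-counter transaction."""
    key = (applicant["last"].upper(), applicant["first"].upper(), applicant["dob"])
    return key in registry

def batch_screen(days_applications):
    """Batch style: bundle a day's applications and run them overnight,
    returning possible hits for follow-up by an authorized agency."""
    return [a for a in days_applications if screen(a)]

days_applications = [
    {"last": "Doe", "first": "John", "dob": "1970-01-01"},
    {"last": "Smith", "first": "Jane", "dob": "1985-06-15"},
]
print(batch_screen(days_applications))  # only the matching record is flagged
```

The two functions use the same matching logic; the operational difference is solely when the check runs, which is why officials viewed batch processing as a way to simplify IT requirements and avoid interrupting counter staff.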
Coordination with Implementation of the REAL ID Act, to the Extent Feasible: As noted earlier in this report, motor vehicle agency officials frequently mentioned that their systems were already overburdened, and it will be difficult for them to take on new demands. In particular, states noted how implementing the REAL ID Act will be both expensive and technically challenging. For example, compliance with the REAL ID Act’s standards would require states to (1) maintain a motor vehicle database that contains, among other information, all data fields printed on the license or identification card and (2) provide electronic access to all other states to information contained in the motor vehicle database. Although the intent of the REAL ID Act and the sex offender screening process discussed in the Walsh Act are different, states we surveyed reported that they would need to modify their IT systems to implement both. Should the government move forward with the sex offender screening process, officials from both the Department of Justice and AAMVA noted the importance of identifying areas where the modifications needed to implement the screening process overlap with those needed to comply with the REAL ID Act so that both efforts could be integrated and, thereby, avoid requiring states to upgrade their IT systems a second time. Officials we contacted in 15 states, as well as AAMVA officials, were concerned that using driver’s license-related processes to monitor sex offenders could divert resources from motor vehicle agencies’ core business functions. Further, several of these states, as well as AAMVA officials, noted that this could in turn result in longer lines and increased workloads for staff. Additionally, offenders may be unruly or violent, a situation that could jeopardize the safety of staff and customers. 
Law enforcement personnel might be needed on site, a requirement that could be problematic for motor vehicle offices that are small or are located in remote locations. Agency staff might need special training because they would be assuming a law enforcement function. Also, providing explanations to noncompliant offenders may require more physical space for privacy than currently available in over-the-counter settings, which could take away space needed for other purposes. During our study of Nevada’s screening approach, agency officials told us that maintaining the traditional operations of the state’s motor vehicle agency was both an important concern and significant challenge. For example, the Driver’s Program Manager at the Nevada Department of Motor Vehicles noted that the department was opposed to any system that would have required its customer-service staff to function as enforcers of the law when interacting with convicted sex offenders, and the screening approach was designed accordingly. For example, under Nevada’s approach, customer-service employees in the state’s motor vehicle agency do not review or interpret the results of searches against the state’s sex offender registry. Rather, Nevada’s system automatically adjusts the driver’s license expiration date to 1 year for a compliant sex offender or, for a potentially noncompliant offender, automatically prints a set of instructions of additional actions to take, rather than issue a driver’s license. Despite the potential benefits of driver’s license-related processes for monitoring sex offenders, many of the state and AAMVA officials we contacted identified several unintended consequences that might result if not given adequate attention. 
For instance, they said that sex offenders could go “underground” and drive without a valid license, because they might view a 1-year license—which contrasts with the multiyear license typically issued to the general public—as being the equivalent of a “scarlet letter” as the holders would be identifiable to the public as sex offenders. Additionally, Nevada law enforcement officials noted that the state’s new screening process could actually reduce compliance because it imposes additional registration costs on convicted sex offenders. According to these officials, compliant offenders need to pay the license renewal fee ($21.25 for a driver’s license or $86.25 for a commercial driver’s license) annually, rather than every 4 years as is the case with the general public. Finally, while Alabama, Arizona, and six other states require convicted sex offenders to obtain and have in their possession a driver’s license or an identification card, other states, including Nevada, lack this requirement. Thus, in this latter group of states, an offender could simply choose not to apply for either document and, thus, not be subject to the screening process. Currently, federal law does not specifically require convicted sex offenders to obtain and have in their possession either a driver’s license or an identification card. However, under section 114 of the Walsh Act (codified at 42 U.S.C. § 16914), the jurisdiction in which a sex offender registers is to ensure that the state’s registry includes “a photocopy of a valid driver’s license or identification card issued to the sex offender by a jurisdiction.” This statutory provision, according to the Department of Justice, is not a mandate for all states to change their laws to require that every convicted sex offender have either a driver’s license or an identification card. 
Rather, the Department of Justice said that if a convicted sex offender has been issued a driver’s license or an identification card by any jurisdiction, a photocopy of it shall be included in the applicable registry. At the same time, Department of Justice and FBI officials as well as state law enforcement officials noted that the use of names, dates of birth, and other non-biometric identifying information to screen driver’s license applicants against sex offender registry records would undoubtedly result in some false-positive identifications—that is, mistakenly identifying some individuals as being convicted sex offenders. Consequently, procedures would be needed to address incorrect matches. The federal officials said that motor vehicle agencies cannot be expected to prevent or resolve such mistakes, especially if the agencies are provided only a coded, single-digit response to each search query against the FBI’s NSOR. Rather, the federal officials commented that the prospective screening process, if implemented, must involve a law enforcement agency in each state—that is, an agency with sufficient investigative capacity to confirm identities and verify compliance with registration requirements. The importance of some type of mechanism to review potentially incorrect matches was underscored by the results of a 2006 FBI audit, which found widespread problems with the quality of records that states submitted to NSOR. Further, according to FBI officials, the NSOR database does not record whether a convicted sex offender is in compliance with the respective state’s registration requirements. Moreover, compliance status can be difficult to determine because states use different definitions or terminology. Because of these data limitations, the FBI stresses that NSOR is an information file only and does not provide a stand-alone basis for taking official action. 
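The false-positive risk that federal and state officials describe above can be shown with a minimal example. All records here are invented; the point is simply that non-biometric identifiers such as name and date of birth are not unique to one person, which is why a law enforcement agency would still need to confirm identity before any official action.

```python
# Invented registry record for illustration only.
registry = [
    {"name": "JAMES SMITH", "dob": "1980-03-02", "registry_id": "XX-0001"},
]

def possible_hits(applicant_name, applicant_dob):
    """Screen an applicant using only name and date of birth."""
    return [r for r in registry
            if r["name"] == applicant_name.upper() and r["dob"] == applicant_dob]

# An unrelated applicant who happens to share a common name and birth date
# is flagged as a potential match -- a false positive that the motor vehicle
# agency, given only a coded response, could not resolve on its own.
hits = possible_hits("James Smith", "1980-03-02")
print(len(hits))
```

A screening design therefore needs a defined path, involving an agency with investigative capacity, for verifying whether a flagged applicant is actually the registered offender.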
As a result, it will be important to determine how to best minimize mistaken identifications and which agency—the motor vehicle office, law enforcement, the courts, or other entity—should manage an appeals process when such “false positives” occur. Another design consideration is how to maximize the screening program’s usefulness to law enforcement agencies when noncompliant offenders are identified. This is important because local law enforcement personnel generally have the primary responsibility for monitoring offenders and investigating possible failures to comply with registration requirements. In one state, for example, local law enforcement officials told us that immediate notification once a noncompliant offender is identified could help to ensure efficient use of limited resources and enhance compliance with registration requirements. In another state, however, officials noted that knowing an offender’s whereabouts and having the personnel to take action are two separate and distinct issues. The officials pointed out that many law enforcement agencies are already understaffed and would not be able to respond to “hits” without additional resources. In short, simply identifying noncompliant sex offenders will not necessarily result in their arrest or prosecution. Regardless of the screening approach chosen, our prior work on federal internal controls suggests that it will be important to be able to assess its effectiveness. Indeed, performance measures and indicators are critical for evaluating and controlling operations, and managers need operational data to determine if a specific program is achieving intended results. That said, the screening program’s influence on compliance could be difficult to measure because the screening program is but one of various factors that can affect sex offenders’ behavior. 
Another factor, for example, is the extent to which convicted sex offenders are subject to specialized supervision in communities and the adequacy of such supervision. Indeed, directly attributing any change in compliance rates solely to a state’s driver’s license-related process is not possible without controlling for these other factors. Further, to ensure reliable data, key terms such as “compliance” would need to be defined, and the methodology for calculating compliance rates clearly articulated.

Driver’s license-related screening processes could, in concept, help improve the level of compliance with state sex offender registration requirements as well as enhance monitoring. For example, if properly designed, such screening processes could help prevent sex offenders in one state from evading detection simply by moving to another state. However, whether the most feasible approach would be a real-time process similar to that discussed in the Adam Walsh Act or some other method, such as batch processing, remains an open question. Indeed, no state currently has operational experience with the type of real-time screening process discussed in the Walsh Act—a process whereby all driver’s license applicants would be screened against the respective state’s sex offender registry and the FBI’s NSOR before issuance of a license. Moreover, our study found that designing and implementing such a screening process would require states to modify their IT systems and make other changes—changes that could be costly and divert resources from other priorities. Further, the screening process could have efficiency and other operational implications for motor vehicle offices at a time when they are facing other demands with finite resources. Beyond basic design considerations, the results of our work highlight the many questions that surround the most cost-effective way of screening sex offenders using driver’s license processes. 
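The distinction between the real-time approach discussed in the Walsh Act and an alternative such as batch processing can be sketched as follows. This is an illustrative sketch only; the identifiers, record fields, and function names are hypothetical, not any state's actual design.

```python
# Hypothetical sketch of real-time versus batch screening, as discussed above.

def real_time_screen(applicant_id, registry_ids):
    """Query the registry once, at the moment a license is issued,
    so a noncompliant registrant can be stopped before issuance."""
    return applicant_id in registry_ids

def batch_screen(license_file, registry_ids):
    """Cross-match the entire license file against the registry in a
    periodic job (say, nightly), rather than holding up each counter
    transaction; hits are identified after the fact."""
    return [rec for rec in license_file if rec["id"] in registry_ids]

registry_ids = {"R100", "R205"}
license_file = [{"id": "R100"}, {"id": "R999"}]

print(real_time_screen("R205", registry_ids))                           # True
print([rec["id"] for rec in batch_screen(license_file, registry_ids)])  # ['R100']
```

The trade-off mirrors the one state officials described: real-time screening can prevent issuance but burdens every transaction, while batch processing spares the counter workflow at the cost of detecting noncompliance only after a license has been issued.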
As AAMVA and state officials have pointed out, before moving forward, business rules or functional requirements would need to be defined, initial and long-term costs would need to be more precisely estimated, and various operational challenges would need resolution. To help shed more light on these issues, and inform decisions on how best to proceed, it will be important to have better data. Decisions on the optimal approach to pursue—and, if applicable, how best to integrate the design considerations discussed in this report—likely would necessitate collaboration among various stakeholders, including interested states and AAMVA as well as relevant Department of Justice components—particularly the FBI, which manages NSOR, and the SMART Office, which is responsible for administering the standards for the sex offender registration and notification program set forth in the Walsh Act. We provided a draft of this report for comment to the Department of Justice. Also, we provided a draft of this report to the Nevada Department of Motor Vehicles, the Nevada Department of Public Safety, and AAMVA to review for accuracy and clarity. In its written response, the Department of Justice provided technical comments only, which we incorporated in this report where appropriate. In its written response, the Nevada Department of Motor Vehicles stated that it had reviewed the draft for accuracy and clarity and had no comments. The Nevada Department of Public Safety orally informed us that it had no comments. In its written response, AAMVA expressed appreciation that the draft report identified the IT and related costs of adding sex offender registration queries to state motor vehicle agency and AAMVA operations. 
However, AAMVA emphasized that the following points should be recognized if Congress were to decide to move forward with this type of screening process:

- A comprehensive study should first be conducted, with a report detailing, scoping out, and finalizing the requirements to build the system and the associated costs. In this regard, states and AAMVA will require funding, and these investments are necessary in advance of operating the system.

- Consideration should be given to leveraging or using, as a conduit, an existing system, namely AAMVA’s secure, private network (AAMVAnet), which connects the 50 states and the District of Columbia. Costs would be incurred to connect U.S. territories and Indian tribes, given that these entities currently are not part of AAMVAnet. Also, additional resources would be required to operate the system.

Further, AAMVA provided various technical comments, which we incorporated in this report where appropriate. We are providing copies of this report to interested congressional committees, the Attorney General, and other interested parties. We will also make copies available to others on request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or goldenkoffr@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report were Carla D. Brown, Willie Commons, Christine Davis, Michele Fejfar, Sally P. Gilley, Jeremy L. Hudgeons, Rebecca Kuhlmann Taylor, Marvin G. McGill, Linda S. Miller, James R. Russell, and Shana B. Wallace. 
Section 636 of the Adam Walsh Child Protection and Safety Act of 2006 (the Walsh Act)—Public Law Number 109-248, enacted July 27, 2006—mandated that we conduct two studies regarding the use of driver’s license-related processes to encourage registration and provide additional monitoring of convicted sex offenders:

- One mandated study was to focus on the approach recently implemented in the State of Nevada. Under Nevada law effective July 1, 2006, before a driver’s license is issued to a convicted sex offender, the individual must be in compliance with offender registration requirements. A consideration in implementing the new statutory provision in Nevada was the need to develop an electronic interface capability between the Department of Motor Vehicles and the Department of Public Safety, two principal but separate state agencies, in order to screen all applicants for a driver’s license.

- The other mandated study (a “national” study) was to survey a majority of the states to assess their relative systems capabilities and the potential costs to implement a driver’s license-related process that would screen each applicant against the respective state’s sex offender registry before issuing a driver’s license, as Nevada does, as well as against the national sex offender registry (NSOR), which is maintained by the Federal Bureau of Investigation (FBI).

Section 636 of the Walsh Act specified that we complete the Nevada study no later than February 1, 2007, and the national study no later than January 24, 2007, which is “180 days after the date of the enactment of this Act.” To meet these mandated dates, we offered to provide a briefing in January 2007—summarizing the preliminary results to date of our ongoing studies—to the offices of the Chairmen and the Ranking Members of the House and Senate Committees on the Judiciary. Accordingly, during that month, we briefed interested congressional staff. 
Going forward, as agreed with the offices of the Chairmen and the Ranking Members of the House and Senate Committees on the Judiciary, rather than issuing two separate final reports—i.e., a Nevada report and a national report—we incorporated information regarding Nevada’s driver’s license-screening approach into our survey of the majority of states. In sum, in accordance with the congressional mandate and as discussed with those offices, this report addresses the following key questions:

- In what ways are states using driver’s license-related processes to encourage registration or provide additional monitoring of convicted sex offenders?

- If a federal law were enacted requiring states to screen individuals against the respective state’s sex offender registry and the FBI’s NSOR before issuing a driver’s license, (a) what level of modifications would states need to make to their information technology (IT) capabilities to comply with such a federal law and (b) what would be the key cost factors to implement and maintain this screening capability?

- What other factors could affect the successful design and implementation of a process for screening individuals against a state’s sex offender registry and the FBI’s NSOR before issuing a driver’s license?

In addressing these questions, we obtained perspectives regarding Nevada’s driver’s license-related process to encourage registration and provide additional monitoring of convicted sex offenders. We reviewed the legislative history of Chapter 507 of Statutes of Nevada 2005, and we discussed the statutory provisions with staff of the Nevada Legislative Counsel Bureau, the agency that provides research, fiscal information, and other services for the state legislature. 
Also, at the Nevada Department of Motor Vehicles and the Nevada Department of Public Safety, we obtained and analyzed pertinent documentation such as the state’s IT system specifications regarding implementation of the law, and we interviewed responsible officials. Further, to obtain additional information regarding significant challenges or lessons learned in implementing Chapter 507 of Statutes of Nevada 2005, we contacted several local law enforcement agencies. Specifically, we contacted the district attorney’s office and the major city police department in Clark County and Washoe County. We chose these locations because they are the state’s two most populous counties. Clark County includes the City of Las Vegas and is the most populous of Nevada’s 17 counties, with 1.8 million residents and 70 percent of the state’s population, and Washoe County includes Reno, which is the state’s second largest city. Also, in conjunction with our site visit to the Nevada Department of Motor Vehicles and the Nevada Department of Public Safety in the state capital (Carson City), we contacted the Carson City Sheriff’s Department. We also surveyed 26 other states to obtain additional perspectives on the screening process discussed in section 636 of the Walsh Act. Specifically, we contacted motor vehicle agency and sex offender registry officials in 26 states, a nonprobability sample, selected to reflect regional representation across the nation and a range in the number of sex offender registrants (e.g., small, medium, and large), as well as states with and without some type of driver’s license-related process for monitoring sex offenders (see table 1). Information obtained from the states we surveyed cannot be generalized to all states. More details about the scope and methodology of our work in addressing each of the three key questions are presented in the following sections, respectively. 
To identify states that have statutory requirements for using driver’s license-related processes to encourage registration or provide additional monitoring of convicted sex offenders, we reviewed state statutes through the end of July 2007. Thus, this report does not reflect any state statutory provisions enacted after July 2007. Also, in identifying states that have driver’s license-related processes for monitoring convicted sex offenders, we did not include any state whose motor vehicle agency’s only role was notification, such as use of a driver’s license application form that contains a statement informing convicted sex offenders of the duty to register. In addition, because we did not confirm our statutory research by interviewing state officials, our interpretation of the statutory requirements does not include any information on how states may be implementing these requirements. In addition to identifying states that have statutory requirements for using driver’s license-related processes for monitoring convicted sex offenders, we also identified some states that have agency-initiated processes by reviewing the Web sites of agencies responsible for either maintaining the respective state’s sex offender registry or issuing driver’s licenses. Further, we reviewed documentation obtained from and interviewed officials at the American Association of Motor Vehicle Administrators (AAMVA) and the National Conference of State Legislatures. These identifications may not be exhaustive because we did not contact motor vehicle and law enforcement agencies in all 50 states to specifically inquire about the availability or use of agency-initiated processes. Rather, our identification of the agency-initiated processes was an ancillary result of the work we conducted to identify states that have statutory requirements for using driver’s license-related processes and to otherwise address the objectives of our mandated study. 
To determine what level of modifications would be needed to states’ IT capabilities to comply with a prospective federal law that would require screening individuals against the respective state’s sex offender registry and the FBI’s NSOR before issuing a driver’s license and to determine the key cost factors in implementing and maintaining this screening capability, we conducted a survey of a majority (26) of the states, which involved contacting motor vehicle agency and sex offender registry officials in each of the states (see table 1). In surveying the 26 states, we developed, pretested, and implemented a telephone questionnaire to collect information. To help ensure substantive discussions, we distributed the questionnaire to applicable state officials in advance of our meetings with them. Most of these meetings were conducted by telephone conference, although we made in-person visits to Nevada and three other states (Delaware, Georgia, and Maryland). We selected these three states because Delaware has a driver’s license-related screening process while Georgia and Maryland do not, and we wanted to obtain additional perspectives on factors affecting states’ decisions to implement or not implement a screening process. The questionnaire asked motor vehicle agency and sex offender registry officials for their perspectives on the level of modifications to IT systems—minimum, moderate, or major—that would be needed to establish the screening process discussed in section 636 of the Walsh Act. Also, the questionnaire solicited information regarding the various cost factors that might affect the modification and the relative impact that each factor might have on the overall cost of the modification. We received responses to the questionnaire from motor vehicle agencies and/or sex offender registries in 25 of the 26 states. 
In total, the 25 states provided 41 responses, which consisted of responses from 20 motor vehicle agencies and 17 sex offender registry agencies, plus responses from 4 states that each presented the combined views of the respective state’s motor vehicle and sex offender registry agencies. Some of the 41 respondents did not answer every item in the questionnaire, so not all reported responses are based on 41 agencies’ information. For example, the data presented in figure 4 were based on answers from 38 respondents, and the data in figure 5 from 39 respondents. Further, as an additional source for obtaining perspectives on IT capabilities and costs regarding the screening process discussed in section 636 of the Walsh Act, we interviewed officials from the Department of Justice’s Office of Justice Programs. Also, we contacted AAMVA to explore options for expanding or leveraging existing IT capabilities to screen sex offender registries before issuing driver’s licenses. To determine what other factors, in addition to IT capabilities and costs, could affect the successful design and implementation of a process for screening individuals against a state’s sex offender registry and the FBI’s NSOR before issuing a driver’s license, we relied largely on the information we obtained during our review of Nevada’s driver’s license screening process and from our contacts with motor vehicle agency and sex offender registry officials in the 26 survey states (see table 1). For instance, in contacting motor vehicle agencies, we obtained perspectives regarding the potential effects of the proposed screening process on the traditional operations of the agencies—as well as the implications of other demands, particularly the requirements of the REAL ID Act, which creates national standards for the issuance of state driver’s licenses and identification cards. 
Also, we contacted the FBI’s Criminal Justice Information Services (CJIS) Division, which is responsible for managing NSOR and other component files of the National Crime Information Center. In contacting the CJIS Division, we interviewed program managers and reviewed documentation regarding the capacity or capability of the National Crime Information Center and NSOR to handle the volume of searches that could be anticipated if a federal law were enacted, the potential for and implications of false negatives or false positives stemming from name-based searches, and other relevant issues or concerns. Regarding the quality and completeness of NSOR records, we reviewed the results of the most recent audits conducted by the FBI’s CJIS Audit Unit. These audits—conducted in a two-part cycle that ended in April 2005 and September 2006, respectively—covered records submitted by 49 states, the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands. Moreover, from the FBI’s CJIS Division, we obtained views regarding periodic batch processing of motor vehicle agency records against sex offender registries as a possible alternative to the real-time screening of driver’s license applicants. We followed up with applicable states to discuss the potential advantages and disadvantages of such batch processing versus real-time processing. Further, to obtain additional law enforcement perspectives on the screening process discussed in section 636 of the Walsh Act, we contacted the U.S. Marshals Service. Among other responsibilities, under section 142 of the Walsh Act, the Attorney General is required to use the resources of federal law enforcement, including the U.S. Marshals Service, to assist jurisdictions in locating and apprehending sex offenders who violate sex offender registration requirements. We conducted this performance audit from July 2006 through December 2007 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix presents an overview of driver’s license-related processes that states use for encouraging registration or providing additional monitoring of convicted sex offenders. Such processes may be based on statutory requirements or on agency initiatives, as indicated in the following respective sections. To identify states that have statutory requirements for using driver’s license-related processes for encouraging registration or providing additional monitoring of convicted sex offenders, we reviewed state statutes through the end of July 2007. Table 2 presents the results of our research. As shown, we identified a total of 20 relevant state statutes as of July 2007. Given the degree of legislative interest in monitoring sex offenders, as indicated by the relatively recent effective dates of many of the state statutes, table 2 should be considered a snapshot of state information as of July 2007, with changes likely to occur in the future. In addition, because we did not confirm our statutory research by interviewing state officials, table 2 reflects our interpretation of the statutory requirements we identified and does not include any information on how states may be implementing these requirements. As table 2 indicates, the 20 states reflect various types of driver’s license-related processes for encouraging registration or providing additional monitoring of convicted sex offenders. 
Generally, these processes can be grouped within the following five categories:

- Mandatory-Identification States: Alabama, Arizona, Delaware, Florida, Indiana, Louisiana, Michigan, Mississippi, and Texas require convicted sex offenders to obtain identification in the form of a driver’s license, an identification card, or a sex offender registration card issued through driver’s license-related processes. Of these states, Alabama, Arizona, Indiana, and Louisiana explicitly require convicted sex offenders to carry their identification.

- Annual-Renewal States: In Arizona, Illinois, Kansas, Louisiana, Oklahoma, Texas, and Utah, the driver’s licenses or identification cards of convicted sex offenders expire annually and are subject to annual renewal. Of these states, Arizona, Louisiana, and Texas mandate that sex offenders obtain a driver’s license or an identification card, making the annual renewal process mandatory. In the remaining states, annual renewal is contingent upon the sex offender’s decision to maintain a valid driver’s license or identification card, though penalties would apply for driving with an invalid license.

- License-Suspension States: In Illinois, Massachusetts, Mississippi, Nevada, and North Carolina, the motor vehicle agency must suspend, cancel, or refuse to issue or renew the driver’s license or identification card of a sex offender who is not in compliance with registration requirements.

- License-Annotation States: Alabama, Delaware, Florida, Kansas, Louisiana, Mississippi, and West Virginia label the applicable driver’s license, identification card, or registration card with an annotation that identifies the holder as a sex offender.

- Cross-Validation States: State law enforcement entities in Arizona, Colorado, Florida, New Hampshire, and Virginia use motor vehicle agency records to help ensure the accuracy or currency of addresses in the respective state’s sex offender registry. 
Generally, as discussed in appendix I, our identification of states that have agency-initiated driver’s license-related processes for encouraging registration or providing additional monitoring of convicted sex offenders was an ancillary result of the work we conducted to (1) identify states that have statutory requirements for such processes and (2) otherwise address the objectives of our mandated study. Thus, the listing in table 3 is not intended to be exhaustive because we did not contact motor vehicle and law enforcement agencies in all 50 states to specifically inquire about the availability or use of agency-initiated processes. One of the three states listed in table 3—Florida—is also included in table 2. We found that Florida supplements its statutory cross-validation procedures with additional agency-initiated cross-validation procedures, as discussed in table 3. Further, table 3 identifies two additional states that have non-statutory, agency-initiated processes: California and Pennsylvania are “cross-validation” states. In one sense, Nevada’s screening process (see fig. 3) can be categorized as an agency-initiated process. That is, Nevada’s statute does not specifically require prescreening, but prescreening against the state’s sex offender registry is the method Nevada has adopted to implement the statutory prohibition on issuing driver’s licenses or identification cards to sex offenders who are not in compliance with registration requirements. In terms of the implementation information we developed, it appears that Nevada is unique in being the only state that uses its motor vehicle agency to prescreen all applicants against a sex offender registry before issuing a driver’s license or identification card.
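The prescreening gate described above can be sketched in miniature. This is a hypothetical illustration only: the identifiers, record fields, and outcome labels are invented and do not reflect Nevada's actual interface between its motor vehicle and public safety agencies.

```python
# Hypothetical sketch of a prescreening gate at license issuance.
# Field names and outcomes are illustrative, not Nevada's actual design.

def issue_decision(applicant_id, registry):
    """Check an applicant against the state sex offender registry
    before a driver's license or identification card is issued."""
    entry = registry.get(applicant_id)
    if entry is None:
        return "issue"    # applicant is not a registrant
    if entry["compliant"]:
        return "issue"    # registrant in compliance may be licensed
    return "refuse"       # noncompliant registrant is refused

# Example registry keyed by a state identifier (hypothetical):
registry = {"R1": {"compliant": False}, "R2": {"compliant": True}}

print(issue_decision("R1", registry))   # refuse
print(issue_decision("R2", registry))   # issue
print(issue_decision("R3", registry))   # issue
```

Note that the gate depends entirely on a reliable compliance flag; as discussed earlier in this report, NSOR does not record compliance status, which is one reason a state registry rather than the federal file anchors Nevada's process.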
To enhance public safety, all states have laws requiring convicted sex offenders to register with law enforcement authorities. Because ensuring compliance is a challenge, in part because offenders may move frequently, policy makers are considering a role for motor vehicle agencies. In response to section 636 of the Adam Walsh Child Protection and Safety Act of 2006 (the Walsh Act) and as discussed with congressional committees, this report identifies (1) the various driver's license-related processes that states are using to encourage registration or provide additional monitoring of convicted sex offenders; (2) the level of modifications to states' information technology (IT) capabilities that would be needed, and the key cost factors involved, if a federal law were to require the screening of individuals against the respective state's sex offender registry and the Federal Bureau of Investigation's (FBI) National Sex Offender Registry before issuing a driver's license; and (3) other factors that could affect successful implementation of this type of screening program. To accomplish these objectives, GAO reviewed state statutes and surveyed motor vehicle and public safety agencies in 26 states. The 26 states reflect regional representation, among other factors. GAO also interviewed officials from various components in the Department of Justice (DOJ) and the American Association of Motor Vehicle Administrators (AAMVA). GAO is not making any recommendations in this report. As of July 2007, 22 of the nation's 50 states were using some form of driver's license-related process to encourage registration or provide additional monitoring of convicted sex offenders. 
For example, nine states specifically require convicted sex offenders to obtain a driver's license, an identification card, or a sex offender registration card issued through driver's license-related processes, and five of these nine states also label the respective document with an annotation that identifies the person as a sex offender. One of the 22 states--Nevada--has a process for screening every driver's license applicant against the state's sex offender registry before issuing a license. However, no state has a screening process whereby all applicants are screened against both the respective state's sex offender registry and the FBI's national registry before being issued a driver's license. To establish this type of screening process, most of the motor vehicle agencies and sex offender registries in the 26 states surveyed by GAO said that moderate to major modifications to their current IT systems would be needed, with software modifications being a key cost factor. Many of the responding state agencies indicated that before reliable cost estimates for this type of screening process could be developed, operational or functional requirements must be clearly defined. Moreover, a recurring observation by motor vehicle agency officials was that given competing demands for programming resources, the agencies were not positioned to handle additional projects during the next several years. In addition to addressing IT and cost issues, successful implementation of a driver's license screening program for sex offenders will also hinge on how well the program incorporates key design considerations. Developing an effective "one-size-fits-all" screening program could be a daunting challenge given the different processes, procedures, databases, and operational environments among the motor vehicle and law enforcement agencies across the nation. 
If the federal government were to require this type of screening process, several key design factors could affect the outcomes of the process. Among other considerations cited by federal, state, and AAMVA officials, particularly important are design factors aimed at minimizing the burden on states, maintaining customer service at motor vehicle agencies, and mitigating unintended consequences. Although not an exhaustive list, these design considerations could affect the results from and the costs of a nationwide screening program. Decisions on the optimal approach to pursue--and, if applicable, how best to integrate the design considerations discussed in GAO's report--likely would necessitate collaboration among various stakeholders, including interested states, AAMVA, and the FBI, which manages the national sex offender registry. In commenting on a draft copy of this report, DOJ and AAMVA provided technical clarifications, which GAO incorporated where appropriate.